00:00:00.001 Started by upstream project "autotest-per-patch" build number 130844
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.017 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.018 The recommended git tool is: git
00:00:00.018 using credential 00000000-0000-0000-0000-000000000002
00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.033 Fetching changes from the remote Git repository
00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.055 Using shallow fetch with depth 1
00:00:00.055 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.055 > git --version # timeout=10
00:00:00.078 > git --version # 'git version 2.39.2'
00:00:00.078 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.107 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.107 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.256 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.268 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.282 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD)
00:00:05.282 > git config core.sparsecheckout # timeout=10
00:00:05.295 > git read-tree -mu HEAD # timeout=10
00:00:05.315 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5
00:00:05.340 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions"
00:00:05.340 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10
00:00:05.460 [Pipeline] Start of Pipeline
00:00:05.470 [Pipeline] library
00:00:05.471 Loading library shm_lib@master
00:00:05.471 Library shm_lib@master is cached. Copying from home.
00:00:05.486 [Pipeline] node
00:00:20.505 Still waiting to schedule task
00:00:20.505 Waiting for next available executor on ‘vagrant-vm-host’
00:11:17.581 Running on VM-host-SM4 in /var/jenkins/workspace/raid-vg-autotest
00:11:17.583 [Pipeline] {
00:11:17.596 [Pipeline] catchError
00:11:17.599 [Pipeline] {
00:11:17.614 [Pipeline] wrap
00:11:17.624 [Pipeline] {
00:11:17.634 [Pipeline] stage
00:11:17.636 [Pipeline] { (Prologue)
00:11:17.657 [Pipeline] echo
00:11:17.659 Node: VM-host-SM4
00:11:17.667 [Pipeline] cleanWs
00:11:17.676 [WS-CLEANUP] Deleting project workspace...
00:11:17.676 [WS-CLEANUP] Deferred wipeout is used...
00:11:17.683 [WS-CLEANUP] done
00:11:17.887 [Pipeline] setCustomBuildProperty
00:11:17.994 [Pipeline] httpRequest
00:11:18.398 [Pipeline] echo
00:11:18.400 Sorcerer 10.211.164.101 is alive
00:11:18.412 [Pipeline] retry
00:11:18.414 [Pipeline] {
00:11:18.433 [Pipeline] httpRequest
00:11:18.458 HttpMethod: GET
00:11:18.459 URL: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:11:18.460 Sending request to url: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:11:18.461 Response Code: HTTP/1.1 200 OK
00:11:18.462 Success: Status code 200 is in the accepted range: 200,404
00:11:18.463 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:11:18.589 [Pipeline] }
00:11:18.609 [Pipeline] // retry
00:11:18.618 [Pipeline] sh
00:11:18.899 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:11:18.915 [Pipeline] httpRequest
00:11:19.317 [Pipeline] echo
00:11:19.319 Sorcerer 10.211.164.101 is alive
00:11:19.329 [Pipeline] retry
00:11:19.331 [Pipeline] {
00:11:19.347 [Pipeline] httpRequest
00:11:19.352 HttpMethod: GET
00:11:19.352 URL: http://10.211.164.101/packages/spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz
00:11:19.353 Sending request to url: http://10.211.164.101/packages/spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz
00:11:19.355 Response Code: HTTP/1.1 200 OK
00:11:19.355 Success: Status code 200 is in the accepted range: 200,404
00:11:19.356 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz
00:11:21.583 [Pipeline] }
00:11:21.602 [Pipeline] // retry
00:11:21.611 [Pipeline] sh
00:11:21.892 + tar --no-same-owner -xf spdk_70750b651cc13e3ec3582e9cc45a97f7a1da6059.tar.gz
00:11:25.229 [Pipeline] sh
00:11:25.537 + git -C spdk log --oneline -n5
00:11:25.537 70750b651 test/common: Move nvme_namespace_revert() to nvme/functions.sh
00:11:25.537 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:11:25.537 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:11:25.537 82c46626a lib/event: implement scheduler trace events
00:11:25.537 fa6aec495 lib/thread: register thread owner type for scheduler trace events
00:11:25.560 [Pipeline] writeFile
00:11:25.578 [Pipeline] sh
00:11:25.865 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:11:25.875 [Pipeline] sh
00:11:26.155 + cat autorun-spdk.conf
00:11:26.155 SPDK_RUN_FUNCTIONAL_TEST=1
00:11:26.155 SPDK_RUN_ASAN=1
00:11:26.155 SPDK_RUN_UBSAN=1
00:11:26.155 SPDK_TEST_RAID=1
00:11:26.156 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:26.282 RUN_NIGHTLY=0
00:11:26.284 [Pipeline] }
00:11:26.300 [Pipeline] // stage
00:11:26.315 [Pipeline] stage
00:11:26.317 [Pipeline] { (Run VM)
00:11:26.331 [Pipeline] sh
00:11:26.610 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:11:26.610 + echo 'Start stage prepare_nvme.sh'
00:11:26.610 Start stage prepare_nvme.sh
00:11:26.610 + [[ -n 6 ]]
00:11:26.610 + disk_prefix=ex6
00:11:26.610 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:11:26.610 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:11:26.610 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:11:26.610 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:11:26.610 ++ SPDK_RUN_ASAN=1
00:11:26.610 ++ SPDK_RUN_UBSAN=1
00:11:26.610 ++ SPDK_TEST_RAID=1
00:11:26.610 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:26.610 ++ RUN_NIGHTLY=0
00:11:26.610 + cd /var/jenkins/workspace/raid-vg-autotest
00:11:26.610 + nvme_files=()
00:11:26.610 + declare -A nvme_files
00:11:26.610 + backend_dir=/var/lib/libvirt/images/backends
00:11:26.610 + nvme_files['nvme.img']=5G
00:11:26.610 + nvme_files['nvme-cmb.img']=5G
00:11:26.610 + nvme_files['nvme-multi0.img']=4G
00:11:26.610 + nvme_files['nvme-multi1.img']=4G
00:11:26.610 + nvme_files['nvme-multi2.img']=4G
00:11:26.610 + nvme_files['nvme-openstack.img']=8G
00:11:26.610 + nvme_files['nvme-zns.img']=5G
00:11:26.610 + (( SPDK_TEST_NVME_PMR == 1 ))
00:11:26.610 + (( SPDK_TEST_FTL == 1 ))
00:11:26.610 + (( SPDK_TEST_NVME_FDP == 1 ))
00:11:26.610 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:11:26.610 + for nvme in "${!nvme_files[@]}"
00:11:26.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:11:26.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:11:26.611 + for nvme in "${!nvme_files[@]}"
00:11:26.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:11:26.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:11:26.611 + for nvme in "${!nvme_files[@]}"
00:11:26.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:11:26.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:11:26.611 + for nvme in "${!nvme_files[@]}"
00:11:26.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:11:26.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:11:26.611 + for nvme in "${!nvme_files[@]}"
00:11:26.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:11:26.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:11:26.611 + for nvme in "${!nvme_files[@]}"
00:11:26.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:11:26.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:11:26.611 + for nvme in "${!nvme_files[@]}"
00:11:26.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:11:26.870 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:11:26.870 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:11:26.870 + echo 'End stage prepare_nvme.sh'
00:11:26.870 End stage prepare_nvme.sh
00:11:26.883 [Pipeline] sh
00:11:27.164 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:11:27.165 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:11:27.165
00:11:27.165 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:11:27.165 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:11:27.165 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:11:27.165 HELP=0
00:11:27.165 DRY_RUN=0
00:11:27.165 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:11:27.165 NVME_DISKS_TYPE=nvme,nvme,
00:11:27.165 NVME_AUTO_CREATE=0
00:11:27.165 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:11:27.165 NVME_CMB=,,
00:11:27.165 NVME_PMR=,,
00:11:27.165 NVME_ZNS=,,
00:11:27.165 NVME_MS=,,
00:11:27.165 NVME_FDP=,,
00:11:27.165 SPDK_VAGRANT_DISTRO=fedora39
00:11:27.165 SPDK_VAGRANT_VMCPU=10
00:11:27.165 SPDK_VAGRANT_VMRAM=12288
00:11:27.165 SPDK_VAGRANT_PROVIDER=libvirt
00:11:27.165 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:11:27.165 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:11:27.165 SPDK_OPENSTACK_NETWORK=0
00:11:27.165 VAGRANT_PACKAGE_BOX=0
00:11:27.165 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:11:27.165 FORCE_DISTRO=true
00:11:27.165 VAGRANT_BOX_VERSION=
00:11:27.165 EXTRA_VAGRANTFILES=
00:11:27.165 NIC_MODEL=e1000
00:11:27.165
00:11:27.165 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:11:27.165 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:11:31.414 Bringing machine 'default' up with 'libvirt' provider...
00:11:31.675 ==> default: Creating image (snapshot of base box volume).
00:11:31.933 ==> default: Creating domain with the following settings...
00:11:31.933 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728286350_5b2b0031ae7648d18913
00:11:31.933 ==> default: -- Domain type: kvm
00:11:31.933 ==> default: -- Cpus: 10
00:11:31.933 ==> default: -- Feature: acpi
00:11:31.933 ==> default: -- Feature: apic
00:11:31.933 ==> default: -- Feature: pae
00:11:31.933 ==> default: -- Memory: 12288M
00:11:31.933 ==> default: -- Memory Backing: hugepages:
00:11:31.933 ==> default: -- Management MAC:
00:11:31.933 ==> default: -- Loader:
00:11:31.933 ==> default: -- Nvram:
00:11:31.933 ==> default: -- Base box: spdk/fedora39
00:11:31.933 ==> default: -- Storage pool: default
00:11:31.933 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728286350_5b2b0031ae7648d18913.img (20G)
00:11:31.933 ==> default: -- Volume Cache: default
00:11:31.933 ==> default: -- Kernel:
00:11:31.933 ==> default: -- Initrd:
00:11:31.933 ==> default: -- Graphics Type: vnc
00:11:31.933 ==> default: -- Graphics Port: -1
00:11:31.933 ==> default: -- Graphics IP: 127.0.0.1
00:11:31.933 ==> default: -- Graphics Password: Not defined
00:11:31.933 ==> default: -- Video Type: cirrus
00:11:31.933 ==> default: -- Video VRAM: 9216
00:11:31.933 ==> default: -- Sound Type:
00:11:31.933 ==> default: -- Keymap: en-us
00:11:31.933 ==> default: -- TPM Path:
00:11:31.933 ==> default: -- INPUT: type=mouse, bus=ps2
00:11:31.933 ==> default: -- Command line args:
00:11:31.933 ==> default: -> value=-device,
00:11:31.933 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:11:31.933 ==> default: -> value=-drive,
00:11:31.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:11:31.933 ==> default: -> value=-device,
00:11:31.933 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:31.933 ==> default: -> value=-device,
00:11:31.933 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:11:31.933 ==> default: -> value=-drive,
00:11:31.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:11:31.933 ==> default: -> value=-device,
00:11:31.933 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:31.933 ==> default: -> value=-drive,
00:11:31.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:11:31.933 ==> default: -> value=-device,
00:11:31.933 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:31.933 ==> default: -> value=-drive,
00:11:31.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:11:31.933 ==> default: -> value=-device,
00:11:31.933 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:32.191 ==> default: Creating shared folders metadata...
00:11:32.191 ==> default: Starting domain.
00:11:34.145 ==> default: Waiting for domain to get an IP address...
00:11:52.287 ==> default: Waiting for SSH to become available...
00:11:52.287 ==> default: Configuring and enabling network interfaces...
00:11:55.624 default: SSH address: 192.168.121.92:22
00:11:55.624 default: SSH username: vagrant
00:11:55.624 default: SSH auth method: private key
00:11:57.524 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:12:05.636 ==> default: Mounting SSHFS shared folder...
00:12:07.008 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:12:07.008 ==> default: Checking Mount..
00:12:08.396 ==> default: Folder Successfully Mounted!
00:12:08.396 ==> default: Running provisioner: file...
00:12:09.326 default: ~/.gitconfig => .gitconfig
00:12:09.891
00:12:09.891 SUCCESS!
00:12:09.891
00:12:09.891 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:12:09.891 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:12:09.891 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:12:09.891
00:12:09.897 [Pipeline] }
00:12:09.912 [Pipeline] // stage
00:12:09.920 [Pipeline] dir
00:12:09.920 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:12:09.922 [Pipeline] {
00:12:09.935 [Pipeline] catchError
00:12:09.937 [Pipeline] {
00:12:09.949 [Pipeline] sh
00:12:10.228 + + vagrant ssh-config --hostsed vagrant -ne
00:12:10.228 /^Host/,$p
00:12:10.228 + tee ssh_conf
00:12:14.475 Host vagrant
00:12:14.475 HostName 192.168.121.92
00:12:14.475 User vagrant
00:12:14.475 Port 22
00:12:14.475 UserKnownHostsFile /dev/null
00:12:14.475 StrictHostKeyChecking no
00:12:14.475 PasswordAuthentication no
00:12:14.475 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:12:14.475 IdentitiesOnly yes
00:12:14.475 LogLevel FATAL
00:12:14.475 ForwardAgent yes
00:12:14.475 ForwardX11 yes
00:12:14.475
00:12:14.490 [Pipeline] withEnv
00:12:14.492 [Pipeline] {
00:12:14.507 [Pipeline] sh
00:12:14.788 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:12:14.788 source /etc/os-release
00:12:14.788 [[ -e /image.version ]] && img=$(< /image.version)
00:12:14.788 # Minimal, systemd-like check.
00:12:14.788 if [[ -e /.dockerenv ]]; then
00:12:14.788 # Clear garbage from the node's name:
00:12:14.788 # agt-er_autotest_547-896 -> autotest_547-896
00:12:14.788 # $HOSTNAME is the actual container id
00:12:14.788 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:12:14.788 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:12:14.788 # We can assume this is a mount from a host where container is running,
00:12:14.788 # so fetch its hostname to easily identify the target swarm worker.
00:12:14.788 container="$(< /etc/hostname) ($agent)"
00:12:14.788 else
00:12:14.788 # Fallback
00:12:14.788 container=$agent
00:12:14.788 fi
00:12:14.788 fi
00:12:14.788 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:12:14.788
00:12:15.061 [Pipeline] }
00:12:15.077 [Pipeline] // withEnv
00:12:15.087 [Pipeline] setCustomBuildProperty
00:12:15.104 [Pipeline] stage
00:12:15.107 [Pipeline] { (Tests)
00:12:15.126 [Pipeline] sh
00:12:15.410 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:12:15.735 [Pipeline] sh
00:12:16.108 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:12:16.385 [Pipeline] timeout
00:12:16.386 Timeout set to expire in 1 hr 30 min
00:12:16.389 [Pipeline] {
00:12:16.407 [Pipeline] sh
00:12:16.687 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:12:17.255 HEAD is now at 70750b651 test/common: Move nvme_namespace_revert() to nvme/functions.sh
00:12:17.267 [Pipeline] sh
00:12:17.546 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:12:17.819 [Pipeline] sh
00:12:18.151 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:12:18.426 [Pipeline] sh
00:12:18.707 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:12:18.966 ++ readlink -f spdk_repo
00:12:18.966 + DIR_ROOT=/home/vagrant/spdk_repo
00:12:18.966 + [[ -n /home/vagrant/spdk_repo ]]
00:12:18.966 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:12:18.966 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:12:18.966 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:12:18.966 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:12:18.966 + [[ -d /home/vagrant/spdk_repo/output ]]
00:12:18.966 + [[ raid-vg-autotest == pkgdep-* ]]
00:12:18.966 + cd /home/vagrant/spdk_repo
00:12:18.966 + source /etc/os-release
00:12:18.966 ++ NAME='Fedora Linux'
00:12:18.966 ++ VERSION='39 (Cloud Edition)'
00:12:18.966 ++ ID=fedora
00:12:18.966 ++ VERSION_ID=39
00:12:18.966 ++ VERSION_CODENAME=
00:12:18.966 ++ PLATFORM_ID=platform:f39
00:12:18.966 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:12:18.966 ++ ANSI_COLOR='0;38;2;60;110;180'
00:12:18.966 ++ LOGO=fedora-logo-icon
00:12:18.966 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:12:18.966 ++ HOME_URL=https://fedoraproject.org/
00:12:18.966 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:12:18.967 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:12:18.967 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:12:18.967 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:12:18.967 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:12:18.967 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:12:18.967 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:12:18.967 ++ SUPPORT_END=2024-11-12
00:12:18.967 ++ VARIANT='Cloud Edition'
00:12:18.967 ++ VARIANT_ID=cloud
00:12:18.967 + uname -a
00:12:18.967 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:12:18.967 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:12:19.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:19.533 Hugepages
00:12:19.533 node hugesize free / total
00:12:19.533 node0 1048576kB 0 / 0
00:12:19.533 node0 2048kB 0 / 0
00:12:19.533
00:12:19.533 Type BDF Vendor Device NUMA Driver Device Block devices
00:12:19.533 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:12:19.533 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:12:19.533 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:12:19.533 + rm -f /tmp/spdk-ld-path
00:12:19.533 + source autorun-spdk.conf
00:12:19.533 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:19.533 ++ SPDK_RUN_ASAN=1
00:12:19.533 ++ SPDK_RUN_UBSAN=1
00:12:19.533 ++ SPDK_TEST_RAID=1
00:12:19.533 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:19.533 ++ RUN_NIGHTLY=0
00:12:19.533 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:12:19.533 + [[ -n '' ]]
00:12:19.533 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:12:19.533 + for M in /var/spdk/build-*-manifest.txt
00:12:19.533 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:12:19.533 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:12:19.533 + for M in /var/spdk/build-*-manifest.txt
00:12:19.533 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:12:19.533 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:12:19.533 + for M in /var/spdk/build-*-manifest.txt
00:12:19.533 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:12:19.533 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:12:19.533 ++ uname
00:12:19.533 + [[ Linux == \L\i\n\u\x ]]
00:12:19.533 + sudo dmesg -T
00:12:19.533 + sudo dmesg --clear
00:12:19.533 + dmesg_pid=5260
00:12:19.533 + [[ Fedora Linux == FreeBSD ]]
00:12:19.533 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:19.533 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:19.533 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:19.533 + sudo dmesg -Tw
00:12:19.533 + [[ -x /usr/src/fio-static/fio ]]
00:12:19.533 + export FIO_BIN=/usr/src/fio-static/fio
00:12:19.533 + FIO_BIN=/usr/src/fio-static/fio
00:12:19.533 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:12:19.533 + [[ ! -v VFIO_QEMU_BIN ]]
00:12:19.533 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:12:19.533 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:19.533 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:19.533 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:12:19.533 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:19.533 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:19.533 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:12:19.533 Test configuration:
00:12:19.533 SPDK_RUN_FUNCTIONAL_TEST=1
00:12:19.533 SPDK_RUN_ASAN=1
00:12:19.533 SPDK_RUN_UBSAN=1
00:12:19.533 SPDK_TEST_RAID=1
00:12:19.533 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:19.792 RUN_NIGHTLY=0 07:33:19 -- common/autotest_common.sh@1625 -- $ [[ n == y ]]
00:12:19.792 07:33:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:19.792 07:33:19 -- scripts/common.sh@15 -- $ shopt -s extglob
00:12:19.792 07:33:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:12:19.792 07:33:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:19.792 07:33:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:19.792 07:33:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.792 07:33:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.792 07:33:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.792 07:33:19 -- paths/export.sh@5 -- $ export PATH
00:12:19.792 07:33:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:19.792 07:33:19 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:12:19.792 07:33:19 -- common/autobuild_common.sh@486 -- $ date +%s
00:12:19.792 07:33:19 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728286399.XXXXXX
00:12:19.792 07:33:19 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728286399.37Rqj3
00:12:19.792 07:33:19 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:12:19.792 07:33:19 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:12:19.792 07:33:19 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:12:19.792 07:33:19 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:12:19.792 07:33:19 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:12:19.792 07:33:19 -- common/autobuild_common.sh@502 -- $ get_config_params
00:12:19.792 07:33:19 -- common/autotest_common.sh@410 -- $ xtrace_disable
00:12:19.792 07:33:19 -- common/autotest_common.sh@10 -- $ set +x
00:12:19.792 07:33:19 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:12:19.792 07:33:19 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:12:19.792 07:33:19 -- pm/common@17 -- $ local monitor
00:12:19.792 07:33:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:12:19.792 07:33:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:12:19.792 07:33:19 -- pm/common@25 -- $ sleep 1
00:12:19.792 07:33:19 -- pm/common@21 -- $ date +%s
00:12:19.792 07:33:19 -- pm/common@21 -- $ date +%s
00:12:19.792 07:33:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728286399
00:12:19.792 07:33:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728286399
00:12:19.792 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728286399_collect-cpu-load.pm.log
00:12:19.792 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728286399_collect-vmstat.pm.log
00:12:20.797 07:33:20 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:12:20.797 07:33:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:12:20.797 07:33:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:12:20.797 07:33:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:12:20.797 07:33:20 -- spdk/autobuild.sh@16 -- $ date -u
00:12:20.797 Mon Oct 7 07:33:20 AM UTC 2024
00:12:20.797 07:33:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:12:20.797 v25.01-pre-36-g70750b651
00:12:20.797 07:33:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:12:20.797 07:33:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:12:20.797 07:33:20 -- common/autotest_common.sh@1104 -- $ '[' 3 -le 1 ']'
00:12:20.797 07:33:20 -- common/autotest_common.sh@1110 -- $ xtrace_disable
00:12:20.797 07:33:20 -- common/autotest_common.sh@10 -- $ set +x
00:12:20.797 ************************************
00:12:20.797 START TEST asan
00:12:20.797 ************************************
00:12:20.797 using asan
00:12:20.797 07:33:20 asan -- common/autotest_common.sh@1128 -- $ echo 'using asan'
00:12:20.797
00:12:20.797 real 0m0.001s
00:12:20.797 user 0m0.001s
00:12:20.797 sys 0m0.000s
00:12:20.797 07:33:20 asan -- common/autotest_common.sh@1129 -- $ xtrace_disable
00:12:20.797 07:33:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:12:20.797 ************************************
00:12:20.797 END TEST asan
00:12:20.797 ************************************
00:12:20.797 07:33:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:12:20.797 07:33:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:12:20.797 07:33:20 -- common/autotest_common.sh@1104 -- $ '[' 3 -le 1 ']'
00:12:20.797 07:33:20 -- common/autotest_common.sh@1110 -- $ xtrace_disable
00:12:20.797 07:33:20 -- common/autotest_common.sh@10 -- $ set +x
00:12:20.797 ************************************
00:12:20.797 START TEST ubsan
00:12:20.797 ************************************
00:12:20.797 using ubsan
00:12:20.797 07:33:20 ubsan -- common/autotest_common.sh@1128 -- $ echo 'using ubsan'
00:12:20.797
00:12:20.797 real 0m0.000s
00:12:20.797 user 0m0.000s
00:12:20.797 sys 0m0.000s
00:12:20.797 07:33:20 ubsan -- common/autotest_common.sh@1129 -- $ xtrace_disable
00:12:20.797 07:33:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:12:20.797 ************************************
00:12:20.797 END TEST ubsan
00:12:20.797 ************************************
00:12:20.797 07:33:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:12:20.797 07:33:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:12:20.797 07:33:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:12:20.797 07:33:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:12:20.797 07:33:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:12:20.797 07:33:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:12:20.797 07:33:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:12:20.797 07:33:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:12:20.797 07:33:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:12:21.056 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:21.056 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:12:21.622 Using 'verbs' RDMA provider
00:12:37.494 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:12:52.371 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:12:52.371 Creating mk/config.mk...done.
00:12:52.371 Creating mk/cc.flags.mk...done.
00:12:52.371 Type 'make' to build.
00:12:52.371 07:33:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:12:52.371 07:33:50 -- common/autotest_common.sh@1104 -- $ '[' 3 -le 1 ']'
00:12:52.371 07:33:50 -- common/autotest_common.sh@1110 -- $ xtrace_disable
00:12:52.371 07:33:50 -- common/autotest_common.sh@10 -- $ set +x
00:12:52.371 ************************************
00:12:52.371 START TEST make
00:12:52.371 ************************************
00:12:52.371 07:33:50 make -- common/autotest_common.sh@1128 -- $ make -j10
00:12:52.371 make[1]: Nothing to be done for 'all'.
00:13:04.590 The Meson build system
00:13:04.590 Version: 1.5.0
00:13:04.590 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:13:04.590 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:13:04.590 Build type: native build
00:13:04.590 Program cat found: YES (/usr/bin/cat)
00:13:04.590 Project name: DPDK
00:13:04.590 Project version: 24.03.0
00:13:04.590 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:13:04.590 C linker for the host machine: cc ld.bfd 2.40-14
00:13:04.590 Host machine cpu family: x86_64
00:13:04.590 Host machine cpu: x86_64
00:13:04.590 Message: ## Building in Developer Mode ##
00:13:04.590 Program pkg-config found: YES (/usr/bin/pkg-config)
00:13:04.590 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:13:04.590 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:13:04.590 Program python3 found: YES (/usr/bin/python3)
00:13:04.590 Program cat found: YES (/usr/bin/cat)
00:13:04.590 Compiler for C supports arguments -march=native: YES
00:13:04.590 Checking for size of "void *" : 8
00:13:04.590 Checking for size of "void *" : 8 (cached)
00:13:04.590 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:13:04.590 Library m found: YES
00:13:04.590 Library numa found: YES
00:13:04.590 Has header "numaif.h" : YES
00:13:04.590 Library fdt found: NO
00:13:04.590 Library execinfo found: NO
00:13:04.590 Has header "execinfo.h" : YES
00:13:04.590 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:13:04.590 Run-time dependency libarchive found: NO (tried pkgconfig)
00:13:04.590 Run-time dependency libbsd found: NO (tried pkgconfig)
00:13:04.590 Run-time dependency jansson found: NO (tried pkgconfig)
00:13:04.590 Run-time dependency openssl found: YES 3.1.1
00:13:04.590 Run-time dependency libpcap found: YES 1.10.4
00:13:04.590 Has header "pcap.h" with dependency libpcap: YES
00:13:04.590 Compiler for C supports arguments -Wcast-qual: YES
00:13:04.590 Compiler for C supports arguments -Wdeprecated: YES
00:13:04.590 Compiler for C supports arguments -Wformat: YES
00:13:04.590 Compiler for C supports arguments -Wformat-nonliteral: NO
00:13:04.590 Compiler for C supports arguments -Wformat-security: NO
00:13:04.590 Compiler for C supports arguments -Wmissing-declarations: YES
00:13:04.590 Compiler for C supports arguments -Wmissing-prototypes: YES
00:13:04.590 Compiler for C supports arguments -Wnested-externs: YES
00:13:04.590 Compiler for C supports arguments -Wold-style-definition: YES
00:13:04.590 Compiler for C supports arguments -Wpointer-arith: YES
00:13:04.590 Compiler for C supports arguments -Wsign-compare: YES
00:13:04.590 Compiler for C supports arguments -Wstrict-prototypes: YES
00:13:04.590 Compiler for C supports arguments -Wundef: YES
00:13:04.590 Compiler for C supports arguments -Wwrite-strings: YES
00:13:04.590 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:13:04.590 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:13:04.590 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:13:04.590 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:13:04.590 Program objdump found: YES
(/usr/bin/objdump) 00:13:04.590 Compiler for C supports arguments -mavx512f: YES 00:13:04.590 Checking if "AVX512 checking" compiles: YES 00:13:04.590 Fetching value of define "__SSE4_2__" : 1 00:13:04.590 Fetching value of define "__AES__" : 1 00:13:04.590 Fetching value of define "__AVX__" : 1 00:13:04.590 Fetching value of define "__AVX2__" : 1 00:13:04.590 Fetching value of define "__AVX512BW__" : 1 00:13:04.590 Fetching value of define "__AVX512CD__" : 1 00:13:04.590 Fetching value of define "__AVX512DQ__" : 1 00:13:04.590 Fetching value of define "__AVX512F__" : 1 00:13:04.590 Fetching value of define "__AVX512VL__" : 1 00:13:04.590 Fetching value of define "__PCLMUL__" : 1 00:13:04.590 Fetching value of define "__RDRND__" : 1 00:13:04.590 Fetching value of define "__RDSEED__" : 1 00:13:04.590 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:13:04.590 Fetching value of define "__znver1__" : (undefined) 00:13:04.590 Fetching value of define "__znver2__" : (undefined) 00:13:04.590 Fetching value of define "__znver3__" : (undefined) 00:13:04.590 Fetching value of define "__znver4__" : (undefined) 00:13:04.590 Library asan found: YES 00:13:04.590 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:04.590 Message: lib/log: Defining dependency "log" 00:13:04.590 Message: lib/kvargs: Defining dependency "kvargs" 00:13:04.590 Message: lib/telemetry: Defining dependency "telemetry" 00:13:04.590 Library rt found: YES 00:13:04.590 Checking for function "getentropy" : NO 00:13:04.590 Message: lib/eal: Defining dependency "eal" 00:13:04.590 Message: lib/ring: Defining dependency "ring" 00:13:04.590 Message: lib/rcu: Defining dependency "rcu" 00:13:04.590 Message: lib/mempool: Defining dependency "mempool" 00:13:04.590 Message: lib/mbuf: Defining dependency "mbuf" 00:13:04.590 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:04.590 Fetching value of define "__AVX512F__" : 1 (cached) 00:13:04.590 Fetching value of define "__AVX512BW__" : 1 
(cached) 00:13:04.590 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:13:04.590 Fetching value of define "__AVX512VL__" : 1 (cached) 00:13:04.590 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:13:04.590 Compiler for C supports arguments -mpclmul: YES 00:13:04.590 Compiler for C supports arguments -maes: YES 00:13:04.590 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:04.590 Compiler for C supports arguments -mavx512bw: YES 00:13:04.590 Compiler for C supports arguments -mavx512dq: YES 00:13:04.590 Compiler for C supports arguments -mavx512vl: YES 00:13:04.590 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:04.590 Compiler for C supports arguments -mavx2: YES 00:13:04.590 Compiler for C supports arguments -mavx: YES 00:13:04.590 Message: lib/net: Defining dependency "net" 00:13:04.590 Message: lib/meter: Defining dependency "meter" 00:13:04.590 Message: lib/ethdev: Defining dependency "ethdev" 00:13:04.590 Message: lib/pci: Defining dependency "pci" 00:13:04.590 Message: lib/cmdline: Defining dependency "cmdline" 00:13:04.590 Message: lib/hash: Defining dependency "hash" 00:13:04.590 Message: lib/timer: Defining dependency "timer" 00:13:04.590 Message: lib/compressdev: Defining dependency "compressdev" 00:13:04.590 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:04.590 Message: lib/dmadev: Defining dependency "dmadev" 00:13:04.590 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:04.590 Message: lib/power: Defining dependency "power" 00:13:04.590 Message: lib/reorder: Defining dependency "reorder" 00:13:04.590 Message: lib/security: Defining dependency "security" 00:13:04.590 Has header "linux/userfaultfd.h" : YES 00:13:04.590 Has header "linux/vduse.h" : YES 00:13:04.590 Message: lib/vhost: Defining dependency "vhost" 00:13:04.590 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:04.590 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:04.590 
Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:04.590 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:04.590 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:13:04.590 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:13:04.590 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:13:04.590 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:13:04.590 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:13:04.590 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:13:04.590 Program doxygen found: YES (/usr/local/bin/doxygen) 00:13:04.590 Configuring doxy-api-html.conf using configuration 00:13:04.590 Configuring doxy-api-man.conf using configuration 00:13:04.590 Program mandb found: YES (/usr/bin/mandb) 00:13:04.590 Program sphinx-build found: NO 00:13:04.590 Configuring rte_build_config.h using configuration 00:13:04.590 Message: 00:13:04.590 ================= 00:13:04.590 Applications Enabled 00:13:04.590 ================= 00:13:04.590 00:13:04.590 apps: 00:13:04.590 00:13:04.590 00:13:04.590 Message: 00:13:04.590 ================= 00:13:04.590 Libraries Enabled 00:13:04.590 ================= 00:13:04.590 00:13:04.590 libs: 00:13:04.590 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:04.590 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:13:04.590 cryptodev, dmadev, power, reorder, security, vhost, 00:13:04.590 00:13:04.590 Message: 00:13:04.590 =============== 00:13:04.590 Drivers Enabled 00:13:04.590 =============== 00:13:04.590 00:13:04.590 common: 00:13:04.590 00:13:04.590 bus: 00:13:04.590 pci, vdev, 00:13:04.590 mempool: 00:13:04.590 ring, 00:13:04.590 dma: 00:13:04.590 00:13:04.590 net: 00:13:04.590 00:13:04.590 crypto: 00:13:04.590 00:13:04.590 compress: 00:13:04.590 00:13:04.590 vdpa: 00:13:04.590 00:13:04.590 00:13:04.590 Message: 00:13:04.590 
================= 00:13:04.591 Content Skipped 00:13:04.591 ================= 00:13:04.591 00:13:04.591 apps: 00:13:04.591 dumpcap: explicitly disabled via build config 00:13:04.591 graph: explicitly disabled via build config 00:13:04.591 pdump: explicitly disabled via build config 00:13:04.591 proc-info: explicitly disabled via build config 00:13:04.591 test-acl: explicitly disabled via build config 00:13:04.591 test-bbdev: explicitly disabled via build config 00:13:04.591 test-cmdline: explicitly disabled via build config 00:13:04.591 test-compress-perf: explicitly disabled via build config 00:13:04.591 test-crypto-perf: explicitly disabled via build config 00:13:04.591 test-dma-perf: explicitly disabled via build config 00:13:04.591 test-eventdev: explicitly disabled via build config 00:13:04.591 test-fib: explicitly disabled via build config 00:13:04.591 test-flow-perf: explicitly disabled via build config 00:13:04.591 test-gpudev: explicitly disabled via build config 00:13:04.591 test-mldev: explicitly disabled via build config 00:13:04.591 test-pipeline: explicitly disabled via build config 00:13:04.591 test-pmd: explicitly disabled via build config 00:13:04.591 test-regex: explicitly disabled via build config 00:13:04.591 test-sad: explicitly disabled via build config 00:13:04.591 test-security-perf: explicitly disabled via build config 00:13:04.591 00:13:04.591 libs: 00:13:04.591 argparse: explicitly disabled via build config 00:13:04.591 metrics: explicitly disabled via build config 00:13:04.591 acl: explicitly disabled via build config 00:13:04.591 bbdev: explicitly disabled via build config 00:13:04.591 bitratestats: explicitly disabled via build config 00:13:04.591 bpf: explicitly disabled via build config 00:13:04.591 cfgfile: explicitly disabled via build config 00:13:04.591 distributor: explicitly disabled via build config 00:13:04.591 efd: explicitly disabled via build config 00:13:04.591 eventdev: explicitly disabled via build config 00:13:04.591 
dispatcher: explicitly disabled via build config 00:13:04.591 gpudev: explicitly disabled via build config 00:13:04.591 gro: explicitly disabled via build config 00:13:04.591 gso: explicitly disabled via build config 00:13:04.591 ip_frag: explicitly disabled via build config 00:13:04.591 jobstats: explicitly disabled via build config 00:13:04.591 latencystats: explicitly disabled via build config 00:13:04.591 lpm: explicitly disabled via build config 00:13:04.591 member: explicitly disabled via build config 00:13:04.591 pcapng: explicitly disabled via build config 00:13:04.591 rawdev: explicitly disabled via build config 00:13:04.591 regexdev: explicitly disabled via build config 00:13:04.591 mldev: explicitly disabled via build config 00:13:04.591 rib: explicitly disabled via build config 00:13:04.591 sched: explicitly disabled via build config 00:13:04.591 stack: explicitly disabled via build config 00:13:04.591 ipsec: explicitly disabled via build config 00:13:04.591 pdcp: explicitly disabled via build config 00:13:04.591 fib: explicitly disabled via build config 00:13:04.591 port: explicitly disabled via build config 00:13:04.591 pdump: explicitly disabled via build config 00:13:04.591 table: explicitly disabled via build config 00:13:04.591 pipeline: explicitly disabled via build config 00:13:04.591 graph: explicitly disabled via build config 00:13:04.591 node: explicitly disabled via build config 00:13:04.591 00:13:04.591 drivers: 00:13:04.591 common/cpt: not in enabled drivers build config 00:13:04.591 common/dpaax: not in enabled drivers build config 00:13:04.591 common/iavf: not in enabled drivers build config 00:13:04.591 common/idpf: not in enabled drivers build config 00:13:04.591 common/ionic: not in enabled drivers build config 00:13:04.591 common/mvep: not in enabled drivers build config 00:13:04.591 common/octeontx: not in enabled drivers build config 00:13:04.591 bus/auxiliary: not in enabled drivers build config 00:13:04.591 bus/cdx: not in 
enabled drivers build config 00:13:04.591 bus/dpaa: not in enabled drivers build config 00:13:04.591 bus/fslmc: not in enabled drivers build config 00:13:04.591 bus/ifpga: not in enabled drivers build config 00:13:04.591 bus/platform: not in enabled drivers build config 00:13:04.591 bus/uacce: not in enabled drivers build config 00:13:04.591 bus/vmbus: not in enabled drivers build config 00:13:04.591 common/cnxk: not in enabled drivers build config 00:13:04.591 common/mlx5: not in enabled drivers build config 00:13:04.591 common/nfp: not in enabled drivers build config 00:13:04.591 common/nitrox: not in enabled drivers build config 00:13:04.591 common/qat: not in enabled drivers build config 00:13:04.591 common/sfc_efx: not in enabled drivers build config 00:13:04.591 mempool/bucket: not in enabled drivers build config 00:13:04.591 mempool/cnxk: not in enabled drivers build config 00:13:04.591 mempool/dpaa: not in enabled drivers build config 00:13:04.591 mempool/dpaa2: not in enabled drivers build config 00:13:04.591 mempool/octeontx: not in enabled drivers build config 00:13:04.591 mempool/stack: not in enabled drivers build config 00:13:04.591 dma/cnxk: not in enabled drivers build config 00:13:04.591 dma/dpaa: not in enabled drivers build config 00:13:04.591 dma/dpaa2: not in enabled drivers build config 00:13:04.591 dma/hisilicon: not in enabled drivers build config 00:13:04.591 dma/idxd: not in enabled drivers build config 00:13:04.591 dma/ioat: not in enabled drivers build config 00:13:04.591 dma/skeleton: not in enabled drivers build config 00:13:04.591 net/af_packet: not in enabled drivers build config 00:13:04.591 net/af_xdp: not in enabled drivers build config 00:13:04.591 net/ark: not in enabled drivers build config 00:13:04.591 net/atlantic: not in enabled drivers build config 00:13:04.591 net/avp: not in enabled drivers build config 00:13:04.591 net/axgbe: not in enabled drivers build config 00:13:04.591 net/bnx2x: not in enabled drivers build config 
00:13:04.591 net/bnxt: not in enabled drivers build config 00:13:04.591 net/bonding: not in enabled drivers build config 00:13:04.591 net/cnxk: not in enabled drivers build config 00:13:04.591 net/cpfl: not in enabled drivers build config 00:13:04.591 net/cxgbe: not in enabled drivers build config 00:13:04.591 net/dpaa: not in enabled drivers build config 00:13:04.591 net/dpaa2: not in enabled drivers build config 00:13:04.591 net/e1000: not in enabled drivers build config 00:13:04.591 net/ena: not in enabled drivers build config 00:13:04.591 net/enetc: not in enabled drivers build config 00:13:04.591 net/enetfec: not in enabled drivers build config 00:13:04.591 net/enic: not in enabled drivers build config 00:13:04.591 net/failsafe: not in enabled drivers build config 00:13:04.591 net/fm10k: not in enabled drivers build config 00:13:04.591 net/gve: not in enabled drivers build config 00:13:04.591 net/hinic: not in enabled drivers build config 00:13:04.591 net/hns3: not in enabled drivers build config 00:13:04.591 net/i40e: not in enabled drivers build config 00:13:04.591 net/iavf: not in enabled drivers build config 00:13:04.591 net/ice: not in enabled drivers build config 00:13:04.591 net/idpf: not in enabled drivers build config 00:13:04.591 net/igc: not in enabled drivers build config 00:13:04.591 net/ionic: not in enabled drivers build config 00:13:04.591 net/ipn3ke: not in enabled drivers build config 00:13:04.591 net/ixgbe: not in enabled drivers build config 00:13:04.591 net/mana: not in enabled drivers build config 00:13:04.591 net/memif: not in enabled drivers build config 00:13:04.591 net/mlx4: not in enabled drivers build config 00:13:04.591 net/mlx5: not in enabled drivers build config 00:13:04.591 net/mvneta: not in enabled drivers build config 00:13:04.591 net/mvpp2: not in enabled drivers build config 00:13:04.591 net/netvsc: not in enabled drivers build config 00:13:04.591 net/nfb: not in enabled drivers build config 00:13:04.591 net/nfp: not in 
enabled drivers build config 00:13:04.591 net/ngbe: not in enabled drivers build config 00:13:04.591 net/null: not in enabled drivers build config 00:13:04.591 net/octeontx: not in enabled drivers build config 00:13:04.591 net/octeon_ep: not in enabled drivers build config 00:13:04.591 net/pcap: not in enabled drivers build config 00:13:04.591 net/pfe: not in enabled drivers build config 00:13:04.591 net/qede: not in enabled drivers build config 00:13:04.591 net/ring: not in enabled drivers build config 00:13:04.591 net/sfc: not in enabled drivers build config 00:13:04.591 net/softnic: not in enabled drivers build config 00:13:04.591 net/tap: not in enabled drivers build config 00:13:04.591 net/thunderx: not in enabled drivers build config 00:13:04.591 net/txgbe: not in enabled drivers build config 00:13:04.591 net/vdev_netvsc: not in enabled drivers build config 00:13:04.591 net/vhost: not in enabled drivers build config 00:13:04.591 net/virtio: not in enabled drivers build config 00:13:04.591 net/vmxnet3: not in enabled drivers build config 00:13:04.591 raw/*: missing internal dependency, "rawdev" 00:13:04.591 crypto/armv8: not in enabled drivers build config 00:13:04.591 crypto/bcmfs: not in enabled drivers build config 00:13:04.591 crypto/caam_jr: not in enabled drivers build config 00:13:04.591 crypto/ccp: not in enabled drivers build config 00:13:04.591 crypto/cnxk: not in enabled drivers build config 00:13:04.591 crypto/dpaa_sec: not in enabled drivers build config 00:13:04.591 crypto/dpaa2_sec: not in enabled drivers build config 00:13:04.591 crypto/ipsec_mb: not in enabled drivers build config 00:13:04.591 crypto/mlx5: not in enabled drivers build config 00:13:04.591 crypto/mvsam: not in enabled drivers build config 00:13:04.591 crypto/nitrox: not in enabled drivers build config 00:13:04.591 crypto/null: not in enabled drivers build config 00:13:04.591 crypto/octeontx: not in enabled drivers build config 00:13:04.591 crypto/openssl: not in enabled drivers 
build config 00:13:04.591 crypto/scheduler: not in enabled drivers build config 00:13:04.591 crypto/uadk: not in enabled drivers build config 00:13:04.591 crypto/virtio: not in enabled drivers build config 00:13:04.591 compress/isal: not in enabled drivers build config 00:13:04.591 compress/mlx5: not in enabled drivers build config 00:13:04.591 compress/nitrox: not in enabled drivers build config 00:13:04.591 compress/octeontx: not in enabled drivers build config 00:13:04.591 compress/zlib: not in enabled drivers build config 00:13:04.591 regex/*: missing internal dependency, "regexdev" 00:13:04.591 ml/*: missing internal dependency, "mldev" 00:13:04.591 vdpa/ifc: not in enabled drivers build config 00:13:04.591 vdpa/mlx5: not in enabled drivers build config 00:13:04.591 vdpa/nfp: not in enabled drivers build config 00:13:04.591 vdpa/sfc: not in enabled drivers build config 00:13:04.591 event/*: missing internal dependency, "eventdev" 00:13:04.591 baseband/*: missing internal dependency, "bbdev" 00:13:04.591 gpu/*: missing internal dependency, "gpudev" 00:13:04.591 00:13:04.591 00:13:04.592 Build targets in project: 85 00:13:04.592 00:13:04.592 DPDK 24.03.0 00:13:04.592 00:13:04.592 User defined options 00:13:04.592 buildtype : debug 00:13:04.592 default_library : shared 00:13:04.592 libdir : lib 00:13:04.592 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:04.592 b_sanitize : address 00:13:04.592 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:13:04.592 c_link_args : 00:13:04.592 cpu_instruction_set: native 00:13:04.592 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:13:04.592 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:13:04.592 enable_docs : false 00:13:04.592 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:13:04.592 enable_kmods : false 00:13:04.592 max_lcores : 128 00:13:04.592 tests : false 00:13:04.592 00:13:04.592 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:04.592 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:13:04.592 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:04.592 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:04.592 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:04.592 [4/268] Linking static target lib/librte_kvargs.a 00:13:04.592 [5/268] Linking static target lib/librte_log.a 00:13:04.592 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:04.850 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:05.108 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:05.108 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:05.108 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:05.108 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:05.108 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:05.108 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:05.108 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:05.108 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:05.108 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 
00:13:05.108 [17/268] Linking static target lib/librte_telemetry.a 00:13:05.366 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:05.624 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:05.624 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:05.624 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:05.882 [22/268] Linking target lib/librte_log.so.24.1 00:13:05.882 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:05.882 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:05.882 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:05.882 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:05.882 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:06.140 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:13:06.140 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:06.140 [30/268] Linking target lib/librte_kvargs.so.24.1 00:13:06.140 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:06.140 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:06.140 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:06.140 [34/268] Linking target lib/librte_telemetry.so.24.1 00:13:06.397 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:13:06.397 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:13:06.397 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:06.397 [38/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:06.397 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:06.655 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:06.655 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:06.655 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:06.655 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:06.655 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:06.655 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:06.913 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:07.172 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:07.172 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:07.172 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:07.172 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:07.172 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:07.172 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:07.430 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:07.430 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:07.430 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:07.689 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:07.689 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:07.963 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:07.963 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:07.963 [60/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:07.963 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:07.963 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:07.963 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:07.963 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:07.963 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:07.963 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:08.529 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:08.529 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:08.529 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:08.529 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:08.787 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:08.787 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:08.787 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:08.787 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:08.787 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:08.787 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:08.787 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:08.787 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:09.044 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:09.044 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:09.044 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:09.044 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:09.301 [83/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:09.301 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:09.301 [85/268] Linking static target lib/librte_ring.a 00:13:09.301 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:09.559 [87/268] Linking static target lib/librte_eal.a 00:13:09.559 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:09.559 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:09.559 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:09.559 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:09.559 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:09.817 [93/268] Linking static target lib/librte_mempool.a 00:13:09.817 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.076 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:10.076 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:10.076 [97/268] Linking static target lib/librte_rcu.a 00:13:10.076 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:10.335 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:10.335 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:10.335 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:10.335 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:10.592 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:10.592 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:10.592 [105/268] Linking static target lib/librte_mbuf.a 00:13:10.592 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.592 [107/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:10.592 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:10.592 [109/268] Linking static target lib/librte_net.a 00:13:10.850 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:10.850 [111/268] Linking static target lib/librte_meter.a 00:13:10.850 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:11.108 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:11.108 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:11.108 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.108 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.366 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.366 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:11.624 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:11.882 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.882 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:11.882 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:11.882 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:12.140 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:12.140 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:12.399 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:12.399 [127/268] Linking static target lib/librte_pci.a 00:13:12.399 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:12.399 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:12.399 [130/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:12.399 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:12.399 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:12.657 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:12.657 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.657 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:13:12.657 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:12.916 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:12.916 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:12.916 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:12.916 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:12.916 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:12.916 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:12.916 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:12.916 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:13:12.916 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:12.916 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:12.916 [147/268] Linking static target lib/librte_cmdline.a 00:13:13.175 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:13.175 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:13:13.433 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:13.433 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:13.433 [152/268] Linking static 
target lib/librte_timer.a 00:13:13.433 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:13.433 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:13.691 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:13.691 [156/268] Linking static target lib/librte_ethdev.a 00:13:13.949 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:13.949 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:14.208 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:14.208 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:14.208 [161/268] Linking static target lib/librte_compressdev.a 00:13:14.208 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.208 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:14.466 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:14.724 [165/268] Linking static target lib/librte_hash.a 00:13:14.724 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:14.724 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:14.724 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:14.724 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:14.724 [170/268] Linking static target lib/librte_dmadev.a 00:13:14.982 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:14.982 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.982 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:15.258 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 
00:13:15.259 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:15.517 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:15.517 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:15.517 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:15.517 [179/268] Linking static target lib/librte_cryptodev.a 00:13:15.517 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:15.775 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:15.775 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:15.775 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:15.775 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:15.775 [185/268] Linking static target lib/librte_power.a 00:13:16.033 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:16.292 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:16.292 [188/268] Linking static target lib/librte_reorder.a 00:13:16.292 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:16.292 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:16.551 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:16.551 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:16.551 [193/268] Linking static target lib/librte_security.a 00:13:16.808 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:17.066 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:17.324 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:17.324 [197/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:17.324 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:17.324 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:17.582 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:17.841 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:17.841 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:17.841 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:18.098 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:18.098 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:18.098 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:18.371 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:18.371 [208/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:18.371 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:18.371 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:18.371 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:18.634 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:18.634 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:18.634 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:18.634 [215/268] Linking static target drivers/librte_bus_pci.a 00:13:18.634 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:18.634 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:18.634 [218/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:18.634 [219/268] Linking static target drivers/librte_bus_vdev.a 00:13:18.634 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:18.634 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:18.891 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:18.891 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:18.891 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:18.891 [225/268] Linking static target drivers/librte_mempool_ring.a 00:13:19.149 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:19.149 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:20.083 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:21.459 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:21.459 [230/268] Linking target lib/librte_eal.so.24.1 00:13:21.718 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:13:21.718 [232/268] Linking target lib/librte_ring.so.24.1 00:13:21.718 [233/268] Linking target lib/librte_meter.so.24.1 00:13:21.718 [234/268] Linking target lib/librte_pci.so.24.1 00:13:21.718 [235/268] Linking target lib/librte_timer.so.24.1 00:13:21.718 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:13:21.718 [237/268] Linking target lib/librte_dmadev.so.24.1 00:13:22.003 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:13:22.004 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:13:22.004 [240/268] Generating symbol file 
lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:13:22.004 [241/268] Linking target lib/librte_mempool.so.24.1 00:13:22.004 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:13:22.004 [243/268] Linking target lib/librte_rcu.so.24.1 00:13:22.004 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:13:22.004 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:13:22.004 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:13:22.261 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:13:22.261 [248/268] Linking target lib/librte_mbuf.so.24.1 00:13:22.261 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:13:22.261 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:22.261 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:13:22.261 [252/268] Linking target lib/librte_reorder.so.24.1 00:13:22.261 [253/268] Linking target lib/librte_compressdev.so.24.1 00:13:22.261 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:13:22.519 [255/268] Linking target lib/librte_net.so.24.1 00:13:22.519 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:13:22.519 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:13:22.519 [258/268] Linking target lib/librte_cmdline.so.24.1 00:13:22.519 [259/268] Linking target lib/librte_hash.so.24.1 00:13:22.777 [260/268] Linking target lib/librte_security.so.24.1 00:13:22.777 [261/268] Linking target lib/librte_ethdev.so.24.1 00:13:22.777 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:13:22.777 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:13:22.777 [264/268] Linking target lib/librte_power.so.24.1 
00:13:24.150 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:24.150 [266/268] Linking static target lib/librte_vhost.a 00:13:26.053 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:13:26.053 [268/268] Linking target lib/librte_vhost.so.24.1 00:13:26.053 INFO: autodetecting backend as ninja 00:13:26.053 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:44.169 CC lib/log/log.o 00:13:44.169 CC lib/log/log_flags.o 00:13:44.169 CC lib/log/log_deprecated.o 00:13:44.169 CC lib/ut/ut.o 00:13:44.169 CC lib/ut_mock/mock.o 00:13:44.169 LIB libspdk_log.a 00:13:44.169 LIB libspdk_ut.a 00:13:44.169 LIB libspdk_ut_mock.a 00:13:44.169 SO libspdk_log.so.7.0 00:13:44.169 SO libspdk_ut.so.2.0 00:13:44.169 SO libspdk_ut_mock.so.6.0 00:13:44.169 SYMLINK libspdk_ut.so 00:13:44.169 SYMLINK libspdk_log.so 00:13:44.169 SYMLINK libspdk_ut_mock.so 00:13:44.169 CXX lib/trace_parser/trace.o 00:13:44.169 CC lib/dma/dma.o 00:13:44.169 CC lib/util/base64.o 00:13:44.169 CC lib/util/bit_array.o 00:13:44.169 CC lib/util/crc16.o 00:13:44.169 CC lib/util/crc32.o 00:13:44.169 CC lib/util/cpuset.o 00:13:44.169 CC lib/util/crc32c.o 00:13:44.169 CC lib/ioat/ioat.o 00:13:44.169 CC lib/vfio_user/host/vfio_user_pci.o 00:13:44.169 CC lib/util/crc32_ieee.o 00:13:44.169 CC lib/util/crc64.o 00:13:44.169 CC lib/util/dif.o 00:13:44.169 CC lib/util/fd.o 00:13:44.169 CC lib/util/fd_group.o 00:13:44.169 LIB libspdk_dma.a 00:13:44.169 CC lib/vfio_user/host/vfio_user.o 00:13:44.169 CC lib/util/file.o 00:13:44.169 SO libspdk_dma.so.5.0 00:13:44.169 LIB libspdk_ioat.a 00:13:44.169 CC lib/util/hexlify.o 00:13:44.169 CC lib/util/iov.o 00:13:44.169 SO libspdk_ioat.so.7.0 00:13:44.169 SYMLINK libspdk_dma.so 00:13:44.169 CC lib/util/math.o 00:13:44.169 CC lib/util/net.o 00:13:44.169 SYMLINK libspdk_ioat.so 00:13:44.169 CC lib/util/pipe.o 00:13:44.169 CC 
lib/util/strerror_tls.o 00:13:44.169 CC lib/util/string.o 00:13:44.169 LIB libspdk_vfio_user.a 00:13:44.169 CC lib/util/uuid.o 00:13:44.169 SO libspdk_vfio_user.so.5.0 00:13:44.169 CC lib/util/xor.o 00:13:44.169 CC lib/util/zipf.o 00:13:44.169 CC lib/util/md5.o 00:13:44.169 SYMLINK libspdk_vfio_user.so 00:13:44.169 LIB libspdk_util.a 00:13:44.169 SO libspdk_util.so.10.0 00:13:44.169 LIB libspdk_trace_parser.a 00:13:44.169 SYMLINK libspdk_util.so 00:13:44.169 SO libspdk_trace_parser.so.6.0 00:13:44.169 SYMLINK libspdk_trace_parser.so 00:13:44.427 CC lib/json/json_parse.o 00:13:44.427 CC lib/json/json_util.o 00:13:44.427 CC lib/json/json_write.o 00:13:44.427 CC lib/env_dpdk/env.o 00:13:44.427 CC lib/env_dpdk/memory.o 00:13:44.427 CC lib/idxd/idxd.o 00:13:44.427 CC lib/conf/conf.o 00:13:44.427 CC lib/rdma_utils/rdma_utils.o 00:13:44.427 CC lib/vmd/vmd.o 00:13:44.427 CC lib/rdma_provider/common.o 00:13:44.685 LIB libspdk_conf.a 00:13:44.685 CC lib/rdma_provider/rdma_provider_verbs.o 00:13:44.685 CC lib/vmd/led.o 00:13:44.685 SO libspdk_conf.so.6.0 00:13:44.685 CC lib/idxd/idxd_user.o 00:13:44.685 LIB libspdk_rdma_utils.a 00:13:44.685 SYMLINK libspdk_conf.so 00:13:44.685 CC lib/idxd/idxd_kernel.o 00:13:44.685 SO libspdk_rdma_utils.so.1.0 00:13:44.685 LIB libspdk_json.a 00:13:44.685 SYMLINK libspdk_rdma_utils.so 00:13:44.685 CC lib/env_dpdk/pci.o 00:13:44.685 CC lib/env_dpdk/init.o 00:13:44.685 SO libspdk_json.so.6.0 00:13:44.943 CC lib/env_dpdk/threads.o 00:13:44.943 SYMLINK libspdk_json.so 00:13:44.943 CC lib/env_dpdk/pci_ioat.o 00:13:44.943 LIB libspdk_rdma_provider.a 00:13:44.943 CC lib/env_dpdk/pci_virtio.o 00:13:44.943 SO libspdk_rdma_provider.so.6.0 00:13:44.943 CC lib/env_dpdk/pci_vmd.o 00:13:44.943 SYMLINK libspdk_rdma_provider.so 00:13:44.943 CC lib/env_dpdk/pci_idxd.o 00:13:44.943 CC lib/env_dpdk/pci_event.o 00:13:45.201 CC lib/env_dpdk/sigbus_handler.o 00:13:45.201 CC lib/env_dpdk/pci_dpdk.o 00:13:45.201 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:45.201 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:13:45.201 LIB libspdk_idxd.a 00:13:45.201 LIB libspdk_vmd.a 00:13:45.201 SO libspdk_vmd.so.6.0 00:13:45.201 SO libspdk_idxd.so.12.1 00:13:45.459 SYMLINK libspdk_vmd.so 00:13:45.459 SYMLINK libspdk_idxd.so 00:13:45.459 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:45.459 CC lib/jsonrpc/jsonrpc_server.o 00:13:45.459 CC lib/jsonrpc/jsonrpc_client.o 00:13:45.459 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:45.718 LIB libspdk_jsonrpc.a 00:13:45.976 SO libspdk_jsonrpc.so.6.0 00:13:45.976 SYMLINK libspdk_jsonrpc.so 00:13:46.235 LIB libspdk_env_dpdk.a 00:13:46.235 CC lib/rpc/rpc.o 00:13:46.493 SO libspdk_env_dpdk.so.15.0 00:13:46.493 SYMLINK libspdk_env_dpdk.so 00:13:46.493 LIB libspdk_rpc.a 00:13:46.752 SO libspdk_rpc.so.6.0 00:13:46.752 SYMLINK libspdk_rpc.so 00:13:47.010 CC lib/trace/trace.o 00:13:47.010 CC lib/trace/trace_rpc.o 00:13:47.010 CC lib/trace/trace_flags.o 00:13:47.010 CC lib/keyring/keyring.o 00:13:47.010 CC lib/keyring/keyring_rpc.o 00:13:47.010 CC lib/notify/notify_rpc.o 00:13:47.010 CC lib/notify/notify.o 00:13:47.269 LIB libspdk_notify.a 00:13:47.269 LIB libspdk_trace.a 00:13:47.269 SO libspdk_notify.so.6.0 00:13:47.269 LIB libspdk_keyring.a 00:13:47.269 SO libspdk_trace.so.11.0 00:13:47.269 SO libspdk_keyring.so.2.0 00:13:47.536 SYMLINK libspdk_notify.so 00:13:47.536 SYMLINK libspdk_keyring.so 00:13:47.536 SYMLINK libspdk_trace.so 00:13:47.812 CC lib/sock/sock_rpc.o 00:13:47.812 CC lib/sock/sock.o 00:13:47.812 CC lib/thread/iobuf.o 00:13:47.812 CC lib/thread/thread.o 00:13:48.751 LIB libspdk_sock.a 00:13:48.751 SO libspdk_sock.so.10.0 00:13:48.751 SYMLINK libspdk_sock.so 00:13:49.009 CC lib/nvme/nvme_ctrlr.o 00:13:49.009 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:49.009 CC lib/nvme/nvme_ns_cmd.o 00:13:49.009 CC lib/nvme/nvme.o 00:13:49.009 CC lib/nvme/nvme_fabric.o 00:13:49.009 CC lib/nvme/nvme_qpair.o 00:13:49.009 CC lib/nvme/nvme_pcie_common.o 00:13:49.009 CC lib/nvme/nvme_ns.o 00:13:49.009 CC lib/nvme/nvme_pcie.o 00:13:49.576 LIB 
libspdk_thread.a 00:13:49.834 SO libspdk_thread.so.10.2 00:13:49.834 CC lib/nvme/nvme_quirks.o 00:13:49.834 SYMLINK libspdk_thread.so 00:13:49.834 CC lib/nvme/nvme_transport.o 00:13:49.834 CC lib/nvme/nvme_discovery.o 00:13:49.834 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:50.093 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:50.093 CC lib/nvme/nvme_tcp.o 00:13:50.093 CC lib/nvme/nvme_opal.o 00:13:50.093 CC lib/accel/accel.o 00:13:50.352 CC lib/nvme/nvme_io_msg.o 00:13:50.612 CC lib/blob/blobstore.o 00:13:50.612 CC lib/blob/request.o 00:13:50.612 CC lib/blob/zeroes.o 00:13:50.612 CC lib/init/json_config.o 00:13:50.612 CC lib/init/subsystem.o 00:13:50.871 CC lib/init/subsystem_rpc.o 00:13:50.871 CC lib/init/rpc.o 00:13:50.871 CC lib/blob/blob_bs_dev.o 00:13:50.871 CC lib/nvme/nvme_poll_group.o 00:13:51.242 CC lib/nvme/nvme_zns.o 00:13:51.242 LIB libspdk_init.a 00:13:51.242 SO libspdk_init.so.6.0 00:13:51.242 CC lib/nvme/nvme_stubs.o 00:13:51.242 CC lib/virtio/virtio.o 00:13:51.242 CC lib/fsdev/fsdev.o 00:13:51.242 SYMLINK libspdk_init.so 00:13:51.242 CC lib/fsdev/fsdev_io.o 00:13:51.499 CC lib/virtio/virtio_vhost_user.o 00:13:51.755 CC lib/accel/accel_rpc.o 00:13:51.755 CC lib/virtio/virtio_vfio_user.o 00:13:51.755 CC lib/fsdev/fsdev_rpc.o 00:13:51.755 CC lib/virtio/virtio_pci.o 00:13:51.755 CC lib/accel/accel_sw.o 00:13:51.755 CC lib/nvme/nvme_auth.o 00:13:52.012 CC lib/event/app.o 00:13:52.012 CC lib/event/reactor.o 00:13:52.012 CC lib/event/log_rpc.o 00:13:52.012 CC lib/event/app_rpc.o 00:13:52.012 CC lib/nvme/nvme_cuse.o 00:13:52.012 LIB libspdk_virtio.a 00:13:52.012 LIB libspdk_fsdev.a 00:13:52.012 CC lib/nvme/nvme_rdma.o 00:13:52.012 SO libspdk_virtio.so.7.0 00:13:52.012 SO libspdk_fsdev.so.1.0 00:13:52.268 SYMLINK libspdk_fsdev.so 00:13:52.268 SYMLINK libspdk_virtio.so 00:13:52.268 CC lib/event/scheduler_static.o 00:13:52.268 LIB libspdk_accel.a 00:13:52.268 SO libspdk_accel.so.16.0 00:13:52.524 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:13:52.524 SYMLINK 
libspdk_accel.so 00:13:52.524 LIB libspdk_event.a 00:13:52.524 SO libspdk_event.so.15.0 00:13:52.780 SYMLINK libspdk_event.so 00:13:52.780 CC lib/bdev/bdev.o 00:13:52.780 CC lib/bdev/bdev_zone.o 00:13:52.780 CC lib/bdev/bdev_rpc.o 00:13:52.780 CC lib/bdev/scsi_nvme.o 00:13:52.781 CC lib/bdev/part.o 00:13:53.346 LIB libspdk_fuse_dispatcher.a 00:13:53.346 SO libspdk_fuse_dispatcher.so.1.0 00:13:53.346 SYMLINK libspdk_fuse_dispatcher.so 00:13:53.911 LIB libspdk_nvme.a 00:13:54.169 SO libspdk_nvme.so.14.0 00:13:54.427 SYMLINK libspdk_nvme.so 00:13:54.684 LIB libspdk_blob.a 00:13:54.684 SO libspdk_blob.so.11.0 00:13:54.685 SYMLINK libspdk_blob.so 00:13:55.251 CC lib/blobfs/blobfs.o 00:13:55.251 CC lib/blobfs/tree.o 00:13:55.251 CC lib/lvol/lvol.o 00:13:56.187 LIB libspdk_blobfs.a 00:13:56.187 LIB libspdk_bdev.a 00:13:56.187 SO libspdk_blobfs.so.10.0 00:13:56.187 SO libspdk_bdev.so.17.0 00:13:56.187 SYMLINK libspdk_blobfs.so 00:13:56.187 LIB libspdk_lvol.a 00:13:56.446 SO libspdk_lvol.so.10.0 00:13:56.446 SYMLINK libspdk_lvol.so 00:13:56.446 SYMLINK libspdk_bdev.so 00:13:56.704 CC lib/ftl/ftl_core.o 00:13:56.704 CC lib/ftl/ftl_init.o 00:13:56.704 CC lib/ftl/ftl_layout.o 00:13:56.704 CC lib/ftl/ftl_io.o 00:13:56.704 CC lib/nbd/nbd_rpc.o 00:13:56.704 CC lib/ftl/ftl_debug.o 00:13:56.704 CC lib/nbd/nbd.o 00:13:56.704 CC lib/scsi/dev.o 00:13:56.704 CC lib/ublk/ublk.o 00:13:56.704 CC lib/nvmf/ctrlr.o 00:13:56.962 CC lib/nvmf/ctrlr_discovery.o 00:13:56.962 CC lib/nvmf/ctrlr_bdev.o 00:13:56.962 CC lib/scsi/lun.o 00:13:56.962 CC lib/scsi/port.o 00:13:56.962 CC lib/ublk/ublk_rpc.o 00:13:57.221 CC lib/ftl/ftl_sb.o 00:13:57.221 CC lib/ftl/ftl_l2p.o 00:13:57.221 CC lib/nvmf/subsystem.o 00:13:57.221 CC lib/nvmf/nvmf.o 00:13:57.221 LIB libspdk_nbd.a 00:13:57.479 CC lib/ftl/ftl_l2p_flat.o 00:13:57.479 SO libspdk_nbd.so.7.0 00:13:57.479 CC lib/scsi/scsi.o 00:13:57.479 CC lib/ftl/ftl_nv_cache.o 00:13:57.479 SYMLINK libspdk_nbd.so 00:13:57.479 CC lib/ftl/ftl_band.o 00:13:57.737 CC 
lib/scsi/scsi_bdev.o 00:13:57.737 CC lib/nvmf/nvmf_rpc.o 00:13:57.737 LIB libspdk_ublk.a 00:13:57.737 CC lib/nvmf/transport.o 00:13:57.737 SO libspdk_ublk.so.3.0 00:13:57.995 SYMLINK libspdk_ublk.so 00:13:57.995 CC lib/ftl/ftl_band_ops.o 00:13:57.995 CC lib/nvmf/tcp.o 00:13:58.253 CC lib/nvmf/stubs.o 00:13:58.253 CC lib/nvmf/mdns_server.o 00:13:58.253 CC lib/scsi/scsi_pr.o 00:13:58.511 CC lib/nvmf/rdma.o 00:13:58.770 CC lib/nvmf/auth.o 00:13:58.770 CC lib/ftl/ftl_writer.o 00:13:58.770 CC lib/scsi/scsi_rpc.o 00:13:58.770 CC lib/ftl/ftl_rq.o 00:13:58.770 CC lib/scsi/task.o 00:13:59.027 CC lib/ftl/ftl_reloc.o 00:13:59.027 CC lib/ftl/ftl_l2p_cache.o 00:13:59.027 CC lib/ftl/ftl_p2l.o 00:13:59.027 CC lib/ftl/ftl_p2l_log.o 00:13:59.027 CC lib/ftl/mngt/ftl_mngt.o 00:13:59.027 LIB libspdk_scsi.a 00:13:59.027 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:59.285 SO libspdk_scsi.so.9.0 00:13:59.285 SYMLINK libspdk_scsi.so 00:13:59.285 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:59.285 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:59.285 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:59.285 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:59.543 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:59.543 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:59.543 CC lib/vhost/vhost.o 00:13:59.543 CC lib/iscsi/conn.o 00:13:59.801 CC lib/iscsi/init_grp.o 00:13:59.801 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:59.801 CC lib/vhost/vhost_rpc.o 00:13:59.801 CC lib/vhost/vhost_scsi.o 00:13:59.801 CC lib/iscsi/iscsi.o 00:13:59.801 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:14:00.060 CC lib/iscsi/param.o 00:14:00.060 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:14:00.060 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:14:00.060 CC lib/iscsi/portal_grp.o 00:14:00.318 CC lib/vhost/vhost_blk.o 00:14:00.318 CC lib/vhost/rte_vhost_user.o 00:14:00.318 CC lib/iscsi/tgt_node.o 00:14:00.577 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:14:00.577 CC lib/iscsi/iscsi_subsystem.o 00:14:00.577 CC lib/iscsi/iscsi_rpc.o 00:14:00.577 CC lib/iscsi/task.o 00:14:00.577 CC lib/ftl/utils/ftl_conf.o 
00:14:00.835 CC lib/ftl/utils/ftl_md.o 00:14:00.835 CC lib/ftl/utils/ftl_mempool.o 00:14:00.835 CC lib/ftl/utils/ftl_bitmap.o 00:14:00.835 CC lib/ftl/utils/ftl_property.o 00:14:01.093 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:14:01.093 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:14:01.093 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:14:01.093 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:14:01.093 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:14:01.354 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:14:01.354 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:14:01.354 CC lib/ftl/upgrade/ftl_sb_v3.o 00:14:01.354 CC lib/ftl/upgrade/ftl_sb_v5.o 00:14:01.354 CC lib/ftl/nvc/ftl_nvc_dev.o 00:14:01.354 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:14:01.354 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:14:01.354 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:14:01.354 LIB libspdk_vhost.a 00:14:01.613 CC lib/ftl/base/ftl_base_dev.o 00:14:01.613 LIB libspdk_iscsi.a 00:14:01.613 CC lib/ftl/base/ftl_base_bdev.o 00:14:01.613 CC lib/ftl/ftl_trace.o 00:14:01.613 SO libspdk_vhost.so.8.0 00:14:01.613 LIB libspdk_nvmf.a 00:14:01.613 SO libspdk_iscsi.so.8.0 00:14:01.613 SYMLINK libspdk_vhost.so 00:14:01.871 SO libspdk_nvmf.so.19.0 00:14:01.871 LIB libspdk_ftl.a 00:14:01.871 SYMLINK libspdk_iscsi.so 00:14:01.871 SYMLINK libspdk_nvmf.so 00:14:02.131 SO libspdk_ftl.so.9.0 00:14:02.389 SYMLINK libspdk_ftl.so 00:14:02.956 CC module/env_dpdk/env_dpdk_rpc.o 00:14:02.956 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:02.956 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:14:02.956 CC module/accel/ioat/accel_ioat.o 00:14:02.956 CC module/keyring/file/keyring.o 00:14:02.956 CC module/sock/posix/posix.o 00:14:02.956 CC module/accel/error/accel_error.o 00:14:02.956 CC module/accel/dsa/accel_dsa.o 00:14:02.956 CC module/blob/bdev/blob_bdev.o 00:14:02.956 CC module/fsdev/aio/fsdev_aio.o 00:14:02.956 LIB libspdk_env_dpdk_rpc.a 00:14:02.956 SO libspdk_env_dpdk_rpc.so.6.0 00:14:03.215 SYMLINK libspdk_env_dpdk_rpc.so 00:14:03.215 LIB 
libspdk_scheduler_dpdk_governor.a 00:14:03.215 CC module/accel/error/accel_error_rpc.o 00:14:03.215 SO libspdk_scheduler_dpdk_governor.so.4.0 00:14:03.216 CC module/accel/ioat/accel_ioat_rpc.o 00:14:03.216 LIB libspdk_scheduler_dynamic.a 00:14:03.216 SO libspdk_scheduler_dynamic.so.4.0 00:14:03.216 SYMLINK libspdk_scheduler_dpdk_governor.so 00:14:03.216 CC module/keyring/file/keyring_rpc.o 00:14:03.216 SYMLINK libspdk_scheduler_dynamic.so 00:14:03.216 LIB libspdk_accel_error.a 00:14:03.216 CC module/accel/dsa/accel_dsa_rpc.o 00:14:03.216 LIB libspdk_blob_bdev.a 00:14:03.216 LIB libspdk_accel_ioat.a 00:14:03.216 SO libspdk_accel_error.so.2.0 00:14:03.216 SO libspdk_blob_bdev.so.11.0 00:14:03.216 CC module/keyring/linux/keyring.o 00:14:03.474 SO libspdk_accel_ioat.so.6.0 00:14:03.474 LIB libspdk_keyring_file.a 00:14:03.474 SYMLINK libspdk_accel_error.so 00:14:03.474 CC module/keyring/linux/keyring_rpc.o 00:14:03.474 SYMLINK libspdk_blob_bdev.so 00:14:03.474 CC module/fsdev/aio/fsdev_aio_rpc.o 00:14:03.474 CC module/accel/iaa/accel_iaa.o 00:14:03.474 SO libspdk_keyring_file.so.2.0 00:14:03.474 SYMLINK libspdk_accel_ioat.so 00:14:03.475 CC module/fsdev/aio/linux_aio_mgr.o 00:14:03.475 LIB libspdk_accel_dsa.a 00:14:03.475 CC module/scheduler/gscheduler/gscheduler.o 00:14:03.475 SO libspdk_accel_dsa.so.5.0 00:14:03.475 CC module/accel/iaa/accel_iaa_rpc.o 00:14:03.475 SYMLINK libspdk_keyring_file.so 00:14:03.475 LIB libspdk_keyring_linux.a 00:14:03.475 SYMLINK libspdk_accel_dsa.so 00:14:03.475 SO libspdk_keyring_linux.so.1.0 00:14:03.733 LIB libspdk_scheduler_gscheduler.a 00:14:03.733 SO libspdk_scheduler_gscheduler.so.4.0 00:14:03.733 SYMLINK libspdk_keyring_linux.so 00:14:03.733 LIB libspdk_accel_iaa.a 00:14:03.733 SYMLINK libspdk_scheduler_gscheduler.so 00:14:03.733 SO libspdk_accel_iaa.so.3.0 00:14:03.733 LIB libspdk_fsdev_aio.a 00:14:03.733 SO libspdk_fsdev_aio.so.1.0 00:14:03.733 SYMLINK libspdk_accel_iaa.so 00:14:03.733 CC module/bdev/error/vbdev_error.o 
00:14:03.733 CC module/bdev/delay/vbdev_delay.o 00:14:03.992 CC module/bdev/lvol/vbdev_lvol.o 00:14:03.992 CC module/bdev/gpt/gpt.o 00:14:03.992 CC module/blobfs/bdev/blobfs_bdev.o 00:14:03.992 LIB libspdk_sock_posix.a 00:14:03.992 CC module/bdev/malloc/bdev_malloc.o 00:14:03.992 SYMLINK libspdk_fsdev_aio.so 00:14:03.992 CC module/bdev/null/bdev_null.o 00:14:03.992 SO libspdk_sock_posix.so.6.0 00:14:03.992 CC module/bdev/nvme/bdev_nvme.o 00:14:03.992 SYMLINK libspdk_sock_posix.so 00:14:03.992 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:04.250 CC module/bdev/passthru/vbdev_passthru.o 00:14:04.250 CC module/bdev/gpt/vbdev_gpt.o 00:14:04.250 CC module/bdev/error/vbdev_error_rpc.o 00:14:04.250 CC module/bdev/raid/bdev_raid.o 00:14:04.250 LIB libspdk_blobfs_bdev.a 00:14:04.250 CC module/bdev/null/bdev_null_rpc.o 00:14:04.250 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:04.250 SO libspdk_blobfs_bdev.so.6.0 00:14:04.250 LIB libspdk_bdev_error.a 00:14:04.250 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:04.250 SO libspdk_bdev_error.so.6.0 00:14:04.508 SYMLINK libspdk_blobfs_bdev.so 00:14:04.508 CC module/bdev/raid/bdev_raid_rpc.o 00:14:04.508 LIB libspdk_bdev_gpt.a 00:14:04.508 SYMLINK libspdk_bdev_error.so 00:14:04.508 CC module/bdev/raid/bdev_raid_sb.o 00:14:04.508 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:04.508 LIB libspdk_bdev_null.a 00:14:04.508 SO libspdk_bdev_gpt.so.6.0 00:14:04.508 LIB libspdk_bdev_delay.a 00:14:04.508 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:04.508 SO libspdk_bdev_null.so.6.0 00:14:04.508 SO libspdk_bdev_delay.so.6.0 00:14:04.508 LIB libspdk_bdev_malloc.a 00:14:04.508 SYMLINK libspdk_bdev_gpt.so 00:14:04.508 CC module/bdev/raid/raid0.o 00:14:04.508 SO libspdk_bdev_malloc.so.6.0 00:14:04.508 SYMLINK libspdk_bdev_null.so 00:14:04.508 SYMLINK libspdk_bdev_delay.so 00:14:04.508 CC module/bdev/raid/raid1.o 00:14:04.508 CC module/bdev/raid/concat.o 00:14:04.508 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:04.768 LIB libspdk_bdev_passthru.a 
00:14:04.768 SYMLINK libspdk_bdev_malloc.so 00:14:04.768 SO libspdk_bdev_passthru.so.6.0 00:14:04.768 SYMLINK libspdk_bdev_passthru.so 00:14:04.768 CC module/bdev/raid/raid5f.o 00:14:04.768 CC module/bdev/split/vbdev_split.o 00:14:04.768 CC module/bdev/split/vbdev_split_rpc.o 00:14:05.026 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:05.027 LIB libspdk_bdev_lvol.a 00:14:05.027 CC module/bdev/aio/bdev_aio.o 00:14:05.027 CC module/bdev/aio/bdev_aio_rpc.o 00:14:05.027 SO libspdk_bdev_lvol.so.6.0 00:14:05.027 LIB libspdk_bdev_split.a 00:14:05.027 CC module/bdev/ftl/bdev_ftl.o 00:14:05.285 SO libspdk_bdev_split.so.6.0 00:14:05.286 SYMLINK libspdk_bdev_lvol.so 00:14:05.286 CC module/bdev/ftl/bdev_ftl_rpc.o 00:14:05.286 SYMLINK libspdk_bdev_split.so 00:14:05.286 CC module/bdev/nvme/nvme_rpc.o 00:14:05.286 CC module/bdev/nvme/bdev_mdns_client.o 00:14:05.286 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:05.544 CC module/bdev/nvme/vbdev_opal.o 00:14:05.544 CC module/bdev/nvme/vbdev_opal_rpc.o 00:14:05.544 LIB libspdk_bdev_aio.a 00:14:05.544 LIB libspdk_bdev_ftl.a 00:14:05.544 LIB libspdk_bdev_raid.a 00:14:05.544 SO libspdk_bdev_aio.so.6.0 00:14:05.544 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:14:05.544 SO libspdk_bdev_ftl.so.6.0 00:14:05.544 LIB libspdk_bdev_zone_block.a 00:14:05.544 SO libspdk_bdev_zone_block.so.6.0 00:14:05.544 SO libspdk_bdev_raid.so.6.0 00:14:05.544 SYMLINK libspdk_bdev_aio.so 00:14:05.544 CC module/bdev/iscsi/bdev_iscsi.o 00:14:05.544 SYMLINK libspdk_bdev_zone_block.so 00:14:05.544 SYMLINK libspdk_bdev_ftl.so 00:14:05.544 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:14:05.803 CC module/bdev/virtio/bdev_virtio_scsi.o 00:14:05.803 CC module/bdev/virtio/bdev_virtio_blk.o 00:14:05.803 CC module/bdev/virtio/bdev_virtio_rpc.o 00:14:05.803 SYMLINK libspdk_bdev_raid.so 00:14:06.061 LIB libspdk_bdev_iscsi.a 00:14:06.061 SO libspdk_bdev_iscsi.so.6.0 00:14:06.319 SYMLINK libspdk_bdev_iscsi.so 00:14:06.319 LIB libspdk_bdev_virtio.a 00:14:06.319 SO 
libspdk_bdev_virtio.so.6.0 00:14:06.578 SYMLINK libspdk_bdev_virtio.so 00:14:06.837 LIB libspdk_bdev_nvme.a 00:14:06.837 SO libspdk_bdev_nvme.so.7.0 00:14:07.096 SYMLINK libspdk_bdev_nvme.so 00:14:07.662 CC module/event/subsystems/vmd/vmd.o 00:14:07.662 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:07.662 CC module/event/subsystems/iobuf/iobuf.o 00:14:07.662 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:07.662 CC module/event/subsystems/fsdev/fsdev.o 00:14:07.662 CC module/event/subsystems/sock/sock.o 00:14:07.662 CC module/event/subsystems/scheduler/scheduler.o 00:14:07.662 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:14:07.662 CC module/event/subsystems/keyring/keyring.o 00:14:07.921 LIB libspdk_event_fsdev.a 00:14:07.921 LIB libspdk_event_vmd.a 00:14:07.921 SO libspdk_event_fsdev.so.1.0 00:14:07.921 LIB libspdk_event_scheduler.a 00:14:07.921 LIB libspdk_event_keyring.a 00:14:07.921 LIB libspdk_event_sock.a 00:14:07.921 SO libspdk_event_vmd.so.6.0 00:14:07.921 LIB libspdk_event_vhost_blk.a 00:14:07.921 LIB libspdk_event_iobuf.a 00:14:07.921 SO libspdk_event_sock.so.5.0 00:14:07.921 SO libspdk_event_keyring.so.1.0 00:14:07.921 SO libspdk_event_scheduler.so.4.0 00:14:07.921 SO libspdk_event_vhost_blk.so.3.0 00:14:07.921 SYMLINK libspdk_event_fsdev.so 00:14:07.921 SO libspdk_event_iobuf.so.3.0 00:14:07.921 SYMLINK libspdk_event_sock.so 00:14:07.921 SYMLINK libspdk_event_scheduler.so 00:14:07.921 SYMLINK libspdk_event_keyring.so 00:14:07.921 SYMLINK libspdk_event_vmd.so 00:14:07.921 SYMLINK libspdk_event_iobuf.so 00:14:07.921 SYMLINK libspdk_event_vhost_blk.so 00:14:08.179 CC module/event/subsystems/accel/accel.o 00:14:08.438 LIB libspdk_event_accel.a 00:14:08.438 SO libspdk_event_accel.so.6.0 00:14:08.696 SYMLINK libspdk_event_accel.so 00:14:08.955 CC module/event/subsystems/bdev/bdev.o 00:14:09.212 LIB libspdk_event_bdev.a 00:14:09.212 SO libspdk_event_bdev.so.6.0 00:14:09.212 SYMLINK libspdk_event_bdev.so 00:14:09.471 CC 
module/event/subsystems/scsi/scsi.o 00:14:09.471 CC module/event/subsystems/nbd/nbd.o 00:14:09.471 CC module/event/subsystems/ublk/ublk.o 00:14:09.471 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:09.471 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:09.730 LIB libspdk_event_scsi.a 00:14:09.730 LIB libspdk_event_ublk.a 00:14:09.730 LIB libspdk_event_nbd.a 00:14:09.730 SO libspdk_event_scsi.so.6.0 00:14:09.730 SO libspdk_event_ublk.so.3.0 00:14:09.730 SO libspdk_event_nbd.so.6.0 00:14:09.730 SYMLINK libspdk_event_scsi.so 00:14:09.730 SYMLINK libspdk_event_nbd.so 00:14:09.730 SYMLINK libspdk_event_ublk.so 00:14:09.989 LIB libspdk_event_nvmf.a 00:14:09.989 SO libspdk_event_nvmf.so.6.0 00:14:09.989 SYMLINK libspdk_event_nvmf.so 00:14:09.989 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:14:09.989 CC module/event/subsystems/iscsi/iscsi.o 00:14:10.248 LIB libspdk_event_iscsi.a 00:14:10.248 LIB libspdk_event_vhost_scsi.a 00:14:10.248 SO libspdk_event_iscsi.so.6.0 00:14:10.248 SO libspdk_event_vhost_scsi.so.3.0 00:14:10.506 SYMLINK libspdk_event_iscsi.so 00:14:10.506 SYMLINK libspdk_event_vhost_scsi.so 00:14:10.506 SO libspdk.so.6.0 00:14:10.506 SYMLINK libspdk.so 00:14:10.765 CC app/trace_record/trace_record.o 00:14:10.765 CC app/spdk_nvme_perf/perf.o 00:14:10.765 CXX app/trace/trace.o 00:14:10.765 CC app/spdk_lspci/spdk_lspci.o 00:14:11.025 CC app/nvmf_tgt/nvmf_main.o 00:14:11.025 CC app/iscsi_tgt/iscsi_tgt.o 00:14:11.025 CC app/spdk_tgt/spdk_tgt.o 00:14:11.025 CC examples/util/zipf/zipf.o 00:14:11.025 CC test/thread/poller_perf/poller_perf.o 00:14:11.025 CC test/dma/test_dma/test_dma.o 00:14:11.025 LINK spdk_lspci 00:14:11.025 LINK spdk_trace_record 00:14:11.285 LINK iscsi_tgt 00:14:11.285 LINK nvmf_tgt 00:14:11.285 LINK zipf 00:14:11.285 LINK poller_perf 00:14:11.285 LINK spdk_tgt 00:14:11.285 LINK spdk_trace 00:14:11.285 CC app/spdk_nvme_identify/identify.o 00:14:11.544 CC app/spdk_nvme_discover/discovery_aer.o 00:14:11.544 CC app/spdk_top/spdk_top.o 
00:14:11.544 CC examples/ioat/perf/perf.o 00:14:11.544 CC examples/vmd/lsvmd/lsvmd.o 00:14:11.544 CC app/spdk_dd/spdk_dd.o 00:14:11.544 CC examples/ioat/verify/verify.o 00:14:11.544 LINK test_dma 00:14:11.802 LINK spdk_nvme_discover 00:14:11.802 LINK lsvmd 00:14:11.802 CC app/fio/nvme/fio_plugin.o 00:14:11.802 LINK ioat_perf 00:14:11.802 LINK verify 00:14:12.060 LINK spdk_nvme_perf 00:14:12.060 CC examples/vmd/led/led.o 00:14:12.060 CC test/app/bdev_svc/bdev_svc.o 00:14:12.060 CC test/app/histogram_perf/histogram_perf.o 00:14:12.060 LINK spdk_dd 00:14:12.060 CC test/blobfs/mkfs/mkfs.o 00:14:12.319 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:12.319 LINK led 00:14:12.319 CC test/app/jsoncat/jsoncat.o 00:14:12.319 LINK bdev_svc 00:14:12.319 LINK histogram_perf 00:14:12.319 LINK mkfs 00:14:12.578 LINK spdk_nvme 00:14:12.578 LINK jsoncat 00:14:12.578 CC test/app/stub/stub.o 00:14:12.578 LINK spdk_nvme_identify 00:14:12.578 LINK spdk_top 00:14:12.578 CC examples/idxd/perf/perf.o 00:14:12.836 TEST_HEADER include/spdk/accel.h 00:14:12.836 TEST_HEADER include/spdk/accel_module.h 00:14:12.836 TEST_HEADER include/spdk/assert.h 00:14:12.836 TEST_HEADER include/spdk/barrier.h 00:14:12.836 TEST_HEADER include/spdk/base64.h 00:14:12.836 TEST_HEADER include/spdk/bdev.h 00:14:12.836 TEST_HEADER include/spdk/bdev_module.h 00:14:12.836 TEST_HEADER include/spdk/bdev_zone.h 00:14:12.836 TEST_HEADER include/spdk/bit_array.h 00:14:12.836 CC app/vhost/vhost.o 00:14:12.836 TEST_HEADER include/spdk/bit_pool.h 00:14:12.836 TEST_HEADER include/spdk/blob_bdev.h 00:14:12.836 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:12.836 CC app/fio/bdev/fio_plugin.o 00:14:12.836 TEST_HEADER include/spdk/blobfs.h 00:14:12.836 TEST_HEADER include/spdk/blob.h 00:14:12.836 TEST_HEADER include/spdk/conf.h 00:14:12.836 LINK nvme_fuzz 00:14:12.836 LINK stub 00:14:12.836 TEST_HEADER include/spdk/config.h 00:14:12.836 TEST_HEADER include/spdk/cpuset.h 00:14:12.836 TEST_HEADER include/spdk/crc16.h 00:14:12.836 
TEST_HEADER include/spdk/crc32.h 00:14:12.836 TEST_HEADER include/spdk/crc64.h 00:14:12.836 TEST_HEADER include/spdk/dif.h 00:14:12.836 TEST_HEADER include/spdk/dma.h 00:14:12.836 TEST_HEADER include/spdk/endian.h 00:14:12.836 TEST_HEADER include/spdk/env_dpdk.h 00:14:12.836 TEST_HEADER include/spdk/env.h 00:14:12.836 TEST_HEADER include/spdk/event.h 00:14:12.836 TEST_HEADER include/spdk/fd_group.h 00:14:12.836 TEST_HEADER include/spdk/fd.h 00:14:12.836 TEST_HEADER include/spdk/file.h 00:14:12.836 TEST_HEADER include/spdk/fsdev.h 00:14:12.836 TEST_HEADER include/spdk/fsdev_module.h 00:14:12.836 TEST_HEADER include/spdk/ftl.h 00:14:12.836 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:12.836 TEST_HEADER include/spdk/fuse_dispatcher.h 00:14:12.836 TEST_HEADER include/spdk/gpt_spec.h 00:14:12.836 TEST_HEADER include/spdk/hexlify.h 00:14:12.836 TEST_HEADER include/spdk/histogram_data.h 00:14:12.836 TEST_HEADER include/spdk/idxd.h 00:14:12.836 TEST_HEADER include/spdk/idxd_spec.h 00:14:12.836 TEST_HEADER include/spdk/init.h 00:14:12.836 TEST_HEADER include/spdk/ioat.h 00:14:12.836 TEST_HEADER include/spdk/ioat_spec.h 00:14:12.836 TEST_HEADER include/spdk/iscsi_spec.h 00:14:12.836 TEST_HEADER include/spdk/json.h 00:14:12.836 TEST_HEADER include/spdk/jsonrpc.h 00:14:12.836 TEST_HEADER include/spdk/keyring.h 00:14:12.836 TEST_HEADER include/spdk/keyring_module.h 00:14:12.836 TEST_HEADER include/spdk/likely.h 00:14:12.836 TEST_HEADER include/spdk/log.h 00:14:12.836 TEST_HEADER include/spdk/lvol.h 00:14:12.836 TEST_HEADER include/spdk/md5.h 00:14:12.836 TEST_HEADER include/spdk/memory.h 00:14:12.836 TEST_HEADER include/spdk/mmio.h 00:14:12.836 TEST_HEADER include/spdk/nbd.h 00:14:12.836 TEST_HEADER include/spdk/net.h 00:14:12.836 TEST_HEADER include/spdk/notify.h 00:14:12.836 TEST_HEADER include/spdk/nvme.h 00:14:12.836 TEST_HEADER include/spdk/nvme_intel.h 00:14:12.836 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:12.836 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:12.836 
TEST_HEADER include/spdk/nvme_spec.h 00:14:12.836 TEST_HEADER include/spdk/nvme_zns.h 00:14:12.836 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:12.836 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:12.836 TEST_HEADER include/spdk/nvmf.h 00:14:12.836 TEST_HEADER include/spdk/nvmf_spec.h 00:14:12.836 TEST_HEADER include/spdk/nvmf_transport.h 00:14:12.836 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:14:12.836 CC examples/interrupt_tgt/interrupt_tgt.o 00:14:12.836 TEST_HEADER include/spdk/opal.h 00:14:12.837 TEST_HEADER include/spdk/opal_spec.h 00:14:12.837 TEST_HEADER include/spdk/pci_ids.h 00:14:12.837 TEST_HEADER include/spdk/pipe.h 00:14:12.837 TEST_HEADER include/spdk/queue.h 00:14:12.837 TEST_HEADER include/spdk/reduce.h 00:14:12.837 TEST_HEADER include/spdk/rpc.h 00:14:12.837 TEST_HEADER include/spdk/scheduler.h 00:14:12.837 TEST_HEADER include/spdk/scsi.h 00:14:12.837 TEST_HEADER include/spdk/scsi_spec.h 00:14:12.837 TEST_HEADER include/spdk/sock.h 00:14:12.837 TEST_HEADER include/spdk/stdinc.h 00:14:12.837 TEST_HEADER include/spdk/string.h 00:14:12.837 TEST_HEADER include/spdk/thread.h 00:14:12.837 TEST_HEADER include/spdk/trace.h 00:14:12.837 TEST_HEADER include/spdk/trace_parser.h 00:14:12.837 TEST_HEADER include/spdk/tree.h 00:14:13.095 TEST_HEADER include/spdk/ublk.h 00:14:13.095 TEST_HEADER include/spdk/util.h 00:14:13.095 TEST_HEADER include/spdk/uuid.h 00:14:13.095 TEST_HEADER include/spdk/version.h 00:14:13.095 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:13.095 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:13.095 TEST_HEADER include/spdk/vhost.h 00:14:13.095 TEST_HEADER include/spdk/vmd.h 00:14:13.095 TEST_HEADER include/spdk/xor.h 00:14:13.095 TEST_HEADER include/spdk/zipf.h 00:14:13.095 CXX test/cpp_headers/accel.o 00:14:13.095 CXX test/cpp_headers/accel_module.o 00:14:13.095 LINK vhost 00:14:13.095 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:14:13.095 LINK idxd_perf 00:14:13.095 LINK interrupt_tgt 00:14:13.354 CC 
test/env/mem_callbacks/mem_callbacks.o 00:14:13.354 CC test/event/event_perf/event_perf.o 00:14:13.354 CXX test/cpp_headers/assert.o 00:14:13.354 CC test/env/vtophys/vtophys.o 00:14:13.354 CXX test/cpp_headers/barrier.o 00:14:13.354 CXX test/cpp_headers/base64.o 00:14:13.354 LINK spdk_bdev 00:14:13.354 LINK event_perf 00:14:13.611 LINK vtophys 00:14:13.611 CXX test/cpp_headers/bdev.o 00:14:13.611 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:13.612 CC test/env/memory/memory_ut.o 00:14:13.869 LINK vhost_fuzz 00:14:13.869 CC examples/thread/thread/thread_ex.o 00:14:13.869 CC test/event/reactor/reactor.o 00:14:13.869 CXX test/cpp_headers/bdev_module.o 00:14:13.869 CC test/event/reactor_perf/reactor_perf.o 00:14:13.869 LINK mem_callbacks 00:14:13.869 CXX test/cpp_headers/bdev_zone.o 00:14:13.869 LINK env_dpdk_post_init 00:14:14.126 LINK reactor 00:14:14.126 CC test/lvol/esnap/esnap.o 00:14:14.126 LINK reactor_perf 00:14:14.126 CXX test/cpp_headers/bit_array.o 00:14:14.126 LINK thread 00:14:14.126 CXX test/cpp_headers/bit_pool.o 00:14:14.126 CXX test/cpp_headers/blob_bdev.o 00:14:14.384 CC examples/sock/hello_world/hello_sock.o 00:14:14.385 CC test/event/app_repeat/app_repeat.o 00:14:14.385 CXX test/cpp_headers/blobfs_bdev.o 00:14:14.385 CC test/nvme/aer/aer.o 00:14:14.385 CC test/event/scheduler/scheduler.o 00:14:14.643 CC test/env/pci/pci_ut.o 00:14:14.643 LINK app_repeat 00:14:14.643 CC examples/accel/perf/accel_perf.o 00:14:14.643 CXX test/cpp_headers/blobfs.o 00:14:14.903 LINK hello_sock 00:14:14.903 LINK scheduler 00:14:14.903 CXX test/cpp_headers/blob.o 00:14:14.903 LINK aer 00:14:15.161 CC test/nvme/reset/reset.o 00:14:15.161 CXX test/cpp_headers/conf.o 00:14:15.161 LINK pci_ut 00:14:15.161 CXX test/cpp_headers/config.o 00:14:15.422 LINK memory_ut 00:14:15.422 CXX test/cpp_headers/cpuset.o 00:14:15.422 CC test/rpc_client/rpc_client_test.o 00:14:15.422 LINK iscsi_fuzz 00:14:15.422 LINK accel_perf 00:14:15.422 CC examples/blob/hello_world/hello_blob.o 
00:14:15.422 CXX test/cpp_headers/crc16.o 00:14:15.681 CC test/accel/dif/dif.o 00:14:15.681 LINK reset 00:14:15.681 LINK rpc_client_test 00:14:15.681 CC examples/blob/cli/blobcli.o 00:14:15.681 LINK hello_blob 00:14:15.938 CXX test/cpp_headers/crc32.o 00:14:15.938 CC examples/nvme/hello_world/hello_world.o 00:14:15.938 CC test/nvme/sgl/sgl.o 00:14:15.938 CXX test/cpp_headers/crc64.o 00:14:15.938 CC examples/fsdev/hello_world/hello_fsdev.o 00:14:15.938 CXX test/cpp_headers/dif.o 00:14:16.197 LINK hello_world 00:14:16.197 CC examples/bdev/hello_world/hello_bdev.o 00:14:16.197 CC examples/nvme/reconnect/reconnect.o 00:14:16.197 CC examples/bdev/bdevperf/bdevperf.o 00:14:16.197 CXX test/cpp_headers/dma.o 00:14:16.455 LINK sgl 00:14:16.455 LINK blobcli 00:14:16.455 LINK hello_fsdev 00:14:16.455 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:16.455 LINK hello_bdev 00:14:16.741 CXX test/cpp_headers/endian.o 00:14:16.741 LINK dif 00:14:16.741 CC test/nvme/e2edp/nvme_dp.o 00:14:16.741 LINK reconnect 00:14:16.741 CXX test/cpp_headers/env_dpdk.o 00:14:16.741 CC test/nvme/overhead/overhead.o 00:14:16.741 CC test/nvme/err_injection/err_injection.o 00:14:16.999 CXX test/cpp_headers/env.o 00:14:16.999 CC test/nvme/startup/startup.o 00:14:16.999 CXX test/cpp_headers/event.o 00:14:16.999 LINK err_injection 00:14:16.999 LINK nvme_dp 00:14:16.999 CC examples/nvme/arbitration/arbitration.o 00:14:16.999 LINK overhead 00:14:17.257 LINK nvme_manage 00:14:17.257 LINK startup 00:14:17.257 LINK bdevperf 00:14:17.257 CXX test/cpp_headers/fd_group.o 00:14:17.257 CXX test/cpp_headers/fd.o 00:14:17.257 CXX test/cpp_headers/file.o 00:14:17.257 CC test/bdev/bdevio/bdevio.o 00:14:17.257 CXX test/cpp_headers/fsdev.o 00:14:17.257 CC test/nvme/reserve/reserve.o 00:14:17.515 CC test/nvme/simple_copy/simple_copy.o 00:14:17.515 CXX test/cpp_headers/fsdev_module.o 00:14:17.515 CXX test/cpp_headers/ftl.o 00:14:17.515 CC test/nvme/connect_stress/connect_stress.o 00:14:17.515 CC 
test/nvme/boot_partition/boot_partition.o 00:14:17.515 CC test/nvme/compliance/nvme_compliance.o 00:14:17.772 LINK arbitration 00:14:17.772 LINK reserve 00:14:17.772 LINK simple_copy 00:14:17.772 CXX test/cpp_headers/fuse_dispatcher.o 00:14:17.772 LINK connect_stress 00:14:17.772 LINK bdevio 00:14:17.772 CC test/nvme/fused_ordering/fused_ordering.o 00:14:17.772 LINK boot_partition 00:14:18.030 CXX test/cpp_headers/gpt_spec.o 00:14:18.030 CC examples/nvme/hotplug/hotplug.o 00:14:18.030 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:18.030 CXX test/cpp_headers/hexlify.o 00:14:18.030 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:18.030 LINK nvme_compliance 00:14:18.030 LINK fused_ordering 00:14:18.030 CC test/nvme/fdp/fdp.o 00:14:18.289 CC test/nvme/cuse/cuse.o 00:14:18.289 LINK cmb_copy 00:14:18.289 CC examples/nvme/abort/abort.o 00:14:18.289 CXX test/cpp_headers/histogram_data.o 00:14:18.289 CXX test/cpp_headers/idxd.o 00:14:18.289 LINK doorbell_aers 00:14:18.289 CXX test/cpp_headers/idxd_spec.o 00:14:18.289 LINK hotplug 00:14:18.548 CXX test/cpp_headers/init.o 00:14:18.548 CXX test/cpp_headers/ioat.o 00:14:18.548 CXX test/cpp_headers/ioat_spec.o 00:14:18.548 LINK fdp 00:14:18.548 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:18.548 CXX test/cpp_headers/iscsi_spec.o 00:14:18.548 CXX test/cpp_headers/json.o 00:14:18.806 CXX test/cpp_headers/jsonrpc.o 00:14:18.806 CXX test/cpp_headers/keyring.o 00:14:18.806 CXX test/cpp_headers/keyring_module.o 00:14:18.806 LINK abort 00:14:18.806 CXX test/cpp_headers/likely.o 00:14:18.806 CXX test/cpp_headers/log.o 00:14:18.806 CXX test/cpp_headers/lvol.o 00:14:18.806 LINK pmr_persistence 00:14:18.806 CXX test/cpp_headers/md5.o 00:14:18.806 CXX test/cpp_headers/memory.o 00:14:18.806 CXX test/cpp_headers/mmio.o 00:14:19.062 CXX test/cpp_headers/nbd.o 00:14:19.062 CXX test/cpp_headers/net.o 00:14:19.062 CXX test/cpp_headers/notify.o 00:14:19.062 CXX test/cpp_headers/nvme.o 00:14:19.062 CXX test/cpp_headers/nvme_intel.o 
00:14:19.062 CXX test/cpp_headers/nvme_ocssd.o 00:14:19.062 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:19.062 CXX test/cpp_headers/nvme_spec.o 00:14:19.062 CXX test/cpp_headers/nvme_zns.o 00:14:19.320 CXX test/cpp_headers/nvmf_cmd.o 00:14:19.320 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:19.320 CC examples/nvmf/nvmf/nvmf.o 00:14:19.320 CXX test/cpp_headers/nvmf.o 00:14:19.320 CXX test/cpp_headers/nvmf_spec.o 00:14:19.320 CXX test/cpp_headers/nvmf_transport.o 00:14:19.320 CXX test/cpp_headers/opal.o 00:14:19.320 CXX test/cpp_headers/opal_spec.o 00:14:19.320 CXX test/cpp_headers/pci_ids.o 00:14:19.320 CXX test/cpp_headers/pipe.o 00:14:19.577 CXX test/cpp_headers/queue.o 00:14:19.577 CXX test/cpp_headers/reduce.o 00:14:19.577 CXX test/cpp_headers/rpc.o 00:14:19.577 CXX test/cpp_headers/scheduler.o 00:14:19.577 LINK nvmf 00:14:19.577 CXX test/cpp_headers/scsi.o 00:14:19.577 CXX test/cpp_headers/scsi_spec.o 00:14:19.577 CXX test/cpp_headers/sock.o 00:14:19.577 CXX test/cpp_headers/stdinc.o 00:14:19.577 CXX test/cpp_headers/string.o 00:14:19.836 CXX test/cpp_headers/thread.o 00:14:19.836 CXX test/cpp_headers/trace.o 00:14:19.836 CXX test/cpp_headers/trace_parser.o 00:14:19.836 LINK cuse 00:14:19.836 CXX test/cpp_headers/tree.o 00:14:19.836 CXX test/cpp_headers/ublk.o 00:14:19.836 CXX test/cpp_headers/util.o 00:14:19.836 CXX test/cpp_headers/uuid.o 00:14:19.836 CXX test/cpp_headers/version.o 00:14:19.836 CXX test/cpp_headers/vfio_user_pci.o 00:14:19.836 CXX test/cpp_headers/vfio_user_spec.o 00:14:19.836 CXX test/cpp_headers/vhost.o 00:14:19.836 CXX test/cpp_headers/vmd.o 00:14:20.133 CXX test/cpp_headers/xor.o 00:14:20.133 CXX test/cpp_headers/zipf.o 00:14:21.529 LINK esnap 00:14:22.095 00:14:22.095 real 1m31.297s 00:14:22.095 user 8m31.317s 00:14:22.095 sys 1m59.755s 00:14:22.095 07:35:21 make -- common/autotest_common.sh@1129 -- $ xtrace_disable 00:14:22.095 07:35:21 make -- common/autotest_common.sh@10 -- $ set +x 00:14:22.095 ************************************ 
00:14:22.095 END TEST make 00:14:22.095 ************************************ 00:14:22.095 07:35:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:22.095 07:35:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:22.095 07:35:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:22.095 07:35:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:22.095 07:35:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:22.095 07:35:21 -- pm/common@44 -- $ pid=5297 00:14:22.095 07:35:21 -- pm/common@50 -- $ kill -TERM 5297 00:14:22.095 07:35:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:22.095 07:35:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:22.095 07:35:21 -- pm/common@44 -- $ pid=5299 00:14:22.095 07:35:21 -- pm/common@50 -- $ kill -TERM 5299 00:14:22.095 07:35:21 -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:14:22.095 07:35:21 -- common/autotest_common.sh@1626 -- # lcov --version 00:14:22.095 07:35:21 -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:14:22.354 07:35:21 -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:14:22.354 07:35:21 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.354 07:35:21 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.354 07:35:21 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.354 07:35:21 -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.354 07:35:21 -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.354 07:35:21 -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.354 07:35:21 -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.354 07:35:21 -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.354 07:35:21 -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.354 07:35:21 -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.354 07:35:21 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.354 07:35:21 -- 
scripts/common.sh@344 -- # case "$op" in 00:14:22.354 07:35:21 -- scripts/common.sh@345 -- # : 1 00:14:22.354 07:35:21 -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.354 07:35:21 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.354 07:35:21 -- scripts/common.sh@365 -- # decimal 1 00:14:22.354 07:35:21 -- scripts/common.sh@353 -- # local d=1 00:14:22.354 07:35:21 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.354 07:35:21 -- scripts/common.sh@355 -- # echo 1 00:14:22.354 07:35:21 -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.354 07:35:21 -- scripts/common.sh@366 -- # decimal 2 00:14:22.354 07:35:21 -- scripts/common.sh@353 -- # local d=2 00:14:22.354 07:35:21 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.354 07:35:21 -- scripts/common.sh@355 -- # echo 2 00:14:22.354 07:35:21 -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.354 07:35:21 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.354 07:35:21 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.354 07:35:21 -- scripts/common.sh@368 -- # return 0 00:14:22.354 07:35:21 -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.354 07:35:21 -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:14:22.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.354 --rc genhtml_branch_coverage=1 00:14:22.354 --rc genhtml_function_coverage=1 00:14:22.354 --rc genhtml_legend=1 00:14:22.354 --rc geninfo_all_blocks=1 00:14:22.354 --rc geninfo_unexecuted_blocks=1 00:14:22.354 00:14:22.354 ' 00:14:22.354 07:35:21 -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:14:22.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.355 --rc genhtml_branch_coverage=1 00:14:22.355 --rc genhtml_function_coverage=1 00:14:22.355 --rc genhtml_legend=1 00:14:22.355 --rc geninfo_all_blocks=1 00:14:22.355 --rc geninfo_unexecuted_blocks=1 00:14:22.355 
00:14:22.355 ' 00:14:22.355 07:35:21 -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:14:22.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.355 --rc genhtml_branch_coverage=1 00:14:22.355 --rc genhtml_function_coverage=1 00:14:22.355 --rc genhtml_legend=1 00:14:22.355 --rc geninfo_all_blocks=1 00:14:22.355 --rc geninfo_unexecuted_blocks=1 00:14:22.355 00:14:22.355 ' 00:14:22.355 07:35:21 -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:14:22.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.355 --rc genhtml_branch_coverage=1 00:14:22.355 --rc genhtml_function_coverage=1 00:14:22.355 --rc genhtml_legend=1 00:14:22.355 --rc geninfo_all_blocks=1 00:14:22.355 --rc geninfo_unexecuted_blocks=1 00:14:22.355 00:14:22.355 ' 00:14:22.355 07:35:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.355 07:35:21 -- nvmf/common.sh@7 -- # uname -s 00:14:22.355 07:35:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.355 07:35:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.355 07:35:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.355 07:35:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.355 07:35:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.355 07:35:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.355 07:35:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.355 07:35:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.355 07:35:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.355 07:35:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.355 07:35:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1b4d21d1-c360-43bc-be59-da89d43eb54f 00:14:22.355 07:35:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=1b4d21d1-c360-43bc-be59-da89d43eb54f 00:14:22.355 07:35:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:14:22.355 07:35:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.355 07:35:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:22.355 07:35:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.355 07:35:21 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.355 07:35:21 -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.355 07:35:21 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.355 07:35:21 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.355 07:35:21 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.355 07:35:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.355 07:35:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.355 07:35:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.355 07:35:21 -- paths/export.sh@5 -- # export PATH 00:14:22.355 07:35:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.355 07:35:21 -- nvmf/common.sh@51 -- # : 0 00:14:22.355 07:35:21 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.355 07:35:21 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.355 07:35:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.355 07:35:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.355 07:35:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.355 07:35:21 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.355 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.355 07:35:21 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.355 07:35:21 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.355 07:35:21 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.355 07:35:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:22.355 07:35:21 -- spdk/autotest.sh@32 -- # uname -s 00:14:22.355 07:35:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:22.355 07:35:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:14:22.355 07:35:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:22.355 07:35:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:14:22.355 07:35:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:22.355 07:35:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:22.355 07:35:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:22.355 07:35:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:14:22.355 
07:35:21 -- spdk/autotest.sh@48 -- # udevadm_pid=54320 00:14:22.355 07:35:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:14:22.355 07:35:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:22.355 07:35:21 -- pm/common@17 -- # local monitor 00:14:22.355 07:35:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:22.355 07:35:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:22.355 07:35:21 -- pm/common@25 -- # sleep 1 00:14:22.355 07:35:21 -- pm/common@21 -- # date +%s 00:14:22.355 07:35:21 -- pm/common@21 -- # date +%s 00:14:22.355 07:35:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728286521 00:14:22.355 07:35:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728286521 00:14:22.355 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728286521_collect-cpu-load.pm.log 00:14:22.355 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728286521_collect-vmstat.pm.log 00:14:23.290 07:35:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:23.290 07:35:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:23.290 07:35:22 -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:23.290 07:35:22 -- common/autotest_common.sh@10 -- # set +x 00:14:23.290 07:35:22 -- spdk/autotest.sh@59 -- # create_test_list 00:14:23.290 07:35:22 -- common/autotest_common.sh@751 -- # xtrace_disable 00:14:23.290 07:35:22 -- common/autotest_common.sh@10 -- # set +x 00:14:23.290 07:35:22 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:23.290 07:35:22 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:23.548 07:35:22 -- spdk/autotest.sh@61 -- # 
src=/home/vagrant/spdk_repo/spdk 00:14:23.548 07:35:22 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:23.548 07:35:22 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:23.548 07:35:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:23.548 07:35:22 -- common/autotest_common.sh@1443 -- # uname 00:14:23.548 07:35:22 -- common/autotest_common.sh@1443 -- # '[' Linux = FreeBSD ']' 00:14:23.548 07:35:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:23.548 07:35:22 -- common/autotest_common.sh@1463 -- # uname 00:14:23.548 07:35:22 -- common/autotest_common.sh@1463 -- # [[ Linux = FreeBSD ]] 00:14:23.548 07:35:22 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:14:23.548 07:35:22 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:14:23.548 lcov: LCOV version 1.15 00:14:23.548 07:35:22 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:41.623 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:41.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:15:03.572 07:35:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:15:03.572 07:35:59 -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:03.572 07:35:59 -- common/autotest_common.sh@10 -- # set +x 00:15:03.572 07:35:59 -- spdk/autotest.sh@78 -- # rm -f 00:15:03.572 07:35:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
00:15:03.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:03.573 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:03.573 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:03.573 07:36:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:15:03.573 07:36:00 -- common/autotest_common.sh@1600 -- # zoned_devs=() 00:15:03.573 07:36:00 -- common/autotest_common.sh@1600 -- # local -gA zoned_devs 00:15:03.573 07:36:00 -- common/autotest_common.sh@1601 -- # local nvme bdf 00:15:03.573 07:36:00 -- common/autotest_common.sh@1603 -- # for nvme in /sys/block/nvme* 00:15:03.573 07:36:00 -- common/autotest_common.sh@1604 -- # is_block_zoned nvme0n1 00:15:03.573 07:36:00 -- common/autotest_common.sh@1593 -- # local device=nvme0n1 00:15:03.573 07:36:00 -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:03.573 07:36:00 -- common/autotest_common.sh@1596 -- # [[ none != none ]] 00:15:03.573 07:36:00 -- common/autotest_common.sh@1603 -- # for nvme in /sys/block/nvme* 00:15:03.573 07:36:00 -- common/autotest_common.sh@1604 -- # is_block_zoned nvme1n1 00:15:03.573 07:36:00 -- common/autotest_common.sh@1593 -- # local device=nvme1n1 00:15:03.573 07:36:00 -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:03.573 07:36:00 -- common/autotest_common.sh@1596 -- # [[ none != none ]] 00:15:03.573 07:36:00 -- common/autotest_common.sh@1603 -- # for nvme in /sys/block/nvme* 00:15:03.573 07:36:00 -- common/autotest_common.sh@1604 -- # is_block_zoned nvme1n2 00:15:03.573 07:36:00 -- common/autotest_common.sh@1593 -- # local device=nvme1n2 00:15:03.573 07:36:00 -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:03.573 07:36:00 -- common/autotest_common.sh@1596 -- # [[ none != none ]] 00:15:03.573 07:36:00 -- common/autotest_common.sh@1603 -- # for nvme in /sys/block/nvme* 00:15:03.573 07:36:00 -- 
common/autotest_common.sh@1604 -- # is_block_zoned nvme1n3 00:15:03.573 07:36:00 -- common/autotest_common.sh@1593 -- # local device=nvme1n3 00:15:03.573 07:36:00 -- common/autotest_common.sh@1595 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:03.573 07:36:00 -- common/autotest_common.sh@1596 -- # [[ none != none ]] 00:15:03.573 07:36:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:15:03.573 07:36:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:03.573 07:36:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:03.573 07:36:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:15:03.573 07:36:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:03.573 07:36:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:03.573 No valid GPT data, bailing 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # pt= 00:15:03.573 07:36:00 -- scripts/common.sh@395 -- # return 1 00:15:03.573 07:36:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:15:03.573 1+0 records in 00:15:03.573 1+0 records out 00:15:03.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429047 s, 244 MB/s 00:15:03.573 07:36:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:03.573 07:36:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:03.573 07:36:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:15:03.573 07:36:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:15:03.573 07:36:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:15:03.573 No valid GPT data, bailing 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # pt= 00:15:03.573 07:36:00 -- scripts/common.sh@395 -- # return 1 00:15:03.573 07:36:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme1n1 bs=1M count=1 00:15:03.573 1+0 records in 00:15:03.573 1+0 records out 00:15:03.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425781 s, 246 MB/s 00:15:03.573 07:36:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:03.573 07:36:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:03.573 07:36:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:15:03.573 07:36:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:15:03.573 07:36:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:15:03.573 No valid GPT data, bailing 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # pt= 00:15:03.573 07:36:00 -- scripts/common.sh@395 -- # return 1 00:15:03.573 07:36:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:15:03.573 1+0 records in 00:15:03.573 1+0 records out 00:15:03.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391923 s, 268 MB/s 00:15:03.573 07:36:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:03.573 07:36:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:03.573 07:36:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:15:03.573 07:36:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:15:03.573 07:36:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:15:03.573 No valid GPT data, bailing 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:15:03.573 07:36:00 -- scripts/common.sh@394 -- # pt= 00:15:03.573 07:36:00 -- scripts/common.sh@395 -- # return 1 00:15:03.573 07:36:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:15:03.573 1+0 records in 00:15:03.573 1+0 records out 00:15:03.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386999 s, 271 MB/s 00:15:03.573 07:36:00 -- spdk/autotest.sh@105 -- # sync 
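The trace above repeats one pattern per namespace: skip zoned devices (`is_block_zoned` reads `/sys/block/<dev>/queue/zoned`), then for each namespace with no partition table ("No valid GPT data, bailing") zero the first 1 MiB with `dd`. A minimal standalone sketch of that pass, where a temp file stands in for the real `/dev/nvme*n*` block device and a temp dir stands in for `/sys/block` (both hypothetical stand-ins; the real scripts operate on live devices):

```shell
#!/usr/bin/env bash
# Sketch of the per-namespace wipe pass from the trace. Assumptions are
# marked: $sysfs replaces /sys/block, $dev replaces /dev/nvme0n1.
set -euo pipefail

sysfs=$(mktemp -d)                      # stand-in for /sys/block
mkdir -p "$sysfs/nvme0n1/queue"
echo none > "$sysfs/nvme0n1/queue/zoned"

# Same shape as the traced is_block_zoned: zoned iff the queue/zoned
# attribute exists and is not "none".
is_block_zoned() {
  [[ -e $sysfs/$1/queue/zoned ]] || return 1
  [[ $(<"$sysfs/$1/queue/zoned") != none ]]
}

dev=$(mktemp)                           # stand-in for /dev/nvme0n1
printf 'stale-metadata' > "$dev"
truncate -s 1M "$dev"

if ! is_block_zoned nvme0n1; then
  # Same dd invocation as the log; conv=notrunc keeps the file size.
  dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc status=none
fi

# Verify the first 1 MiB is now all zeroes.
cmp -s "$dev" <(head -c 1048576 /dev/zero) && wiped=yes
echo "wiped=${wiped:-no}"
rm -f "$dev"; rm -rf "$sysfs"
```

In the real run the GPT probe (`spdk-gpt.py`, then `blkid -s PTTYPE`) gates the wipe; here the stand-in file trivially has no partition table, so the wipe always fires.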
00:15:03.573 07:36:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:15:03.573 07:36:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:15:03.573 07:36:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:15:03.832 07:36:03 -- spdk/autotest.sh@111 -- # uname -s 00:15:03.832 07:36:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:15:03.832 07:36:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:15:03.832 07:36:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:04.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:04.771 Hugepages 00:15:04.771 node hugesize free / total 00:15:04.771 node0 1048576kB 0 / 0 00:15:04.771 node0 2048kB 0 / 0 00:15:04.771 00:15:04.771 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:04.771 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:15:04.771 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:15:04.771 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:15:04.771 07:36:04 -- spdk/autotest.sh@117 -- # uname -s 00:15:04.771 07:36:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:15:04.771 07:36:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:15:04.771 07:36:04 -- nvme/functions.sh@217 -- # scan_nvme_ctrls 00:15:04.771 07:36:04 -- nvme/functions.sh@47 -- # local ctrl ctrl_dev reg val ns pci 00:15:04.771 07:36:04 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:15:04.771 07:36:04 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@51 -- # pci=0000:00:10.0 00:15:04.771 07:36:04 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:10.0 00:15:04.771 07:36:04 -- scripts/common.sh@18 -- # local i 00:15:04.771 07:36:04 -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:04.771 07:36:04 -- scripts/common.sh@25 -- # [[ -z '' ]] 
00:15:04.771 07:36:04 -- scripts/common.sh@27 -- # return 0 00:15:04.771 07:36:04 -- nvme/functions.sh@53 -- # ctrl_dev=nvme0 00:15:04.771 07:36:04 -- nvme/functions.sh@54 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:04.771 07:36:04 -- nvme/functions.sh@19 -- # local ref=nvme0 reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@20 -- # shift 00:15:04.771 07:36:04 -- nvme/functions.sh@22 -- # local -gA 'nvme0=()' 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme0 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[vid]="0x1b36"' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[vid]=0x1b36 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ssvid]=0x1af4 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 12340 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[sn]="12340 "' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[sn]='12340 ' 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:04.771 
07:36:04 -- nvme/functions.sh@25 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[fr]='8.0.0 ' 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[rab]="6"' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[rab]=6 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ieee]="525400"' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ieee]=525400 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[cmic]="0"' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[cmic]=0 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.771 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[mdts]="7"' 00:15:04.771 07:36:04 -- nvme/functions.sh@25 -- # nvme0[mdts]=7 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.771 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[cntlid]="0"' 00:15:04.772 07:36:04 
-- nvme/functions.sh@25 -- # nvme0[cntlid]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ver]="0x10400"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ver]=0x10400 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3r]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[rtd3r]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3e]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[rtd3e]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[oaes]="0x100"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[oaes]=0x100 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ctratt]=0x8000 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[rrls]="0"' 00:15:04.772 
07:36:04 -- nvme/functions.sh@25 -- # nvme0[rrls]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[cntrltype]="1"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[cntrltype]=1 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt1]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[crdt1]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt2]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[crdt2]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt3]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[crdt3]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 
07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[nvmsr]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[nvmsr]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[vwci]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[vwci]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[mec]="0"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[mec]=0 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[oacs]="0x12a"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[oacs]=0x12a 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[acl]="3"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[acl]=3 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[aerl]="3"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[aerl]=3 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # 
eval 'nvme0[frmw]="0x3"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[frmw]=0x3 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[lpa]="0x7"' 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # nvme0[lpa]=0x7 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:04.772 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:04.772 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:04.772 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[elpe]="0"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[elpe]=0 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[npss]="0"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[npss]=0 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[avscc]="0"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[avscc]=0 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[apsta]="0"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[apsta]=0 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[wctemp]="343"' 
00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[wctemp]=343 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[cctemp]="373"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[cctemp]=373 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[mtfa]="0"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[mtfa]=0 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[hmpre]="0"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[hmpre]=0 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.037 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[hmmin]="0"' 00:15:05.037 07:36:04 -- nvme/functions.sh@25 -- # nvme0[hmmin]=0 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.037 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[tnvmcap]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[tnvmcap]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[unvmcap]="0"' 00:15:05.038 07:36:04 
-- nvme/functions.sh@25 -- # nvme0[unvmcap]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[rpmbs]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[rpmbs]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[edstt]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[edstt]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[dsto]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[dsto]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[fwug]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[fwug]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[kas]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[kas]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[hctma]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[hctma]=0 
00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[mntmt]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[mntmt]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[mxtmt]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[mxtmt]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[sanicap]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[sanicap]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[hmminds]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[hmminds]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[hmmaxd]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[hmmaxd]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[nsetidmax]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[nsetidmax]=0 00:15:05.038 
07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[endgidmax]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[endgidmax]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[anatt]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[anatt]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[anacap]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[anacap]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[anagrpmax]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[anagrpmax]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[nanagrpid]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[nanagrpid]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[pels]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[pels]=0 00:15:05.038 07:36:04 -- 
nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[domainid]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[domainid]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[megcap]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[megcap]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[sqes]="0x66"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[sqes]=0x66 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[cqes]="0x44"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[cqes]=0x44 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[maxcmd]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[maxcmd]=0 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[nn]="256"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[nn]=256 00:15:05.038 07:36:04 -- 
nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[oncs]="0x15d"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[oncs]=0x15d 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.038 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.038 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[fuses]="0"' 00:15:05.038 07:36:04 -- nvme/functions.sh@25 -- # nvme0[fuses]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[fna]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[fna]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[vwc]="0x7"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[vwc]=0x7 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[awun]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[awun]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[awupf]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[awupf]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 
00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[icsvscc]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[icsvscc]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[nwpc]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[nwpc]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[acwu]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[acwu]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ocfs]="0x3"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ocfs]=0x3 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[sgls]="0x1"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[sgls]=0x1 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[mnan]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[mnan]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- 
nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[maxdna]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[maxdna]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[maxcna]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[maxcna]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[oaqd]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[oaqd]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ioccsz]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ioccsz]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[iorcsz]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[iorcsz]=0 00:15:05.039 07:36:04 -- 
nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[icdoff]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[icdoff]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[fcatt]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[fcatt]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[msdbd]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[msdbd]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ofcs]="0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ofcs]=0 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:05.039 07:36:04 -- nvme/functions.sh@25 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.039 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.039 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:05.040 07:36:04 -- 
nvme/functions.sh@25 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n - ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0[active_power_workload]="-"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0[active_power_workload]=- 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme0_ns 00:15:05.040 07:36:04 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:05.040 07:36:04 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@58 -- # ns_dev=nvme0n1 00:15:05.040 07:36:04 -- nvme/functions.sh@59 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:05.040 07:36:04 -- nvme/functions.sh@19 -- # local ref=nvme0n1 reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@20 -- # shift 00:15:05.040 07:36:04 -- nvme/functions.sh@22 -- # local -gA 'nvme0n1=()' 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme0n1 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nsze]=0x140000 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # 
read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[ncap]=0x140000 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nuse]=0x140000 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nsfeat]=0x14 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nlbaf]=7 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[flbas]=0x4 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mc]="0x3"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[mc]=0x3 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 
00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[dpc]=0x1f 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dps]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[dps]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nmic]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nmic]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[rescap]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[rescap]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[fpi]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[fpi]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[dlfeat]=1 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 
07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawun]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nawun]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawupf]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nawupf]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nacwu]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nacwu]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabsn]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nabsn]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabo]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nabo]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabspf]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nabspf]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- 
nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[noiob]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[noiob]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:05.040 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nvmcap]=0 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.040 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.040 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwg]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[npwg]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwa]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[npwa]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npdg]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[npdg]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npda]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[npda]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- 
nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nows]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nows]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mssrl]="128"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[mssrl]=128 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mcl]="128"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[mcl]=128 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[msrc]="127"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[msrc]=127 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nulbaf]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[anagrpid]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 
07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsattr]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nsattr]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nvmsetid]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[endgid]="0"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[endgid]=0 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[eui64]=0000000000000000 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 
'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 
00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:05.041 07:36:04 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme0 00:15:05.041 07:36:04 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme0_ns 00:15:05.041 07:36:04 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:10.0 00:15:05.041 07:36:04 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme0 00:15:05.041 07:36:04 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:15:05.041 07:36:04 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@51 -- # pci=0000:00:11.0 00:15:05.041 07:36:04 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:11.0 00:15:05.041 07:36:04 -- scripts/common.sh@18 -- # local i 00:15:05.041 07:36:04 -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:05.041 07:36:04 -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:05.041 07:36:04 -- scripts/common.sh@27 -- # return 0 00:15:05.041 07:36:04 -- 
nvme/functions.sh@53 -- # ctrl_dev=nvme1 00:15:05.041 07:36:04 -- nvme/functions.sh@54 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:05.041 07:36:04 -- nvme/functions.sh@19 -- # local ref=nvme1 reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@20 -- # shift 00:15:05.041 07:36:04 -- nvme/functions.sh@22 -- # local -gA 'nvme1=()' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme1 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[vid]="0x1b36"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme1[vid]=0x1b36 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme1[ssvid]=0x1af4 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.041 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 12341 ]] 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[sn]="12341 "' 00:15:05.041 07:36:04 -- nvme/functions.sh@25 -- # nvme1[sn]='12341 ' 00:15:05.041 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:05.042 07:36:04 -- 
nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[fr]='8.0.0 ' 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[rab]="6"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[rab]=6 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[ieee]="525400"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[ieee]=525400 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[cmic]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[cmic]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[mdts]="7"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[mdts]=7 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[cntlid]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[cntlid]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 
-- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[ver]="0x10400"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[ver]=0x10400 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3r]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[rtd3r]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3e]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[rtd3e]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[oaes]="0x100"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[oaes]=0x100 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[ctratt]=0x8000 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[rrls]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[rrls]=0 00:15:05.042 07:36:04 -- 
nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[cntrltype]="1"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[cntrltype]=1 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[crdt1]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[crdt1]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[crdt2]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[crdt2]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[crdt3]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[crdt3]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[nvmsr]="0"' 00:15:05.042 
07:36:04 -- nvme/functions.sh@25 -- # nvme1[nvmsr]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[vwci]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[vwci]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[mec]="0"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[mec]=0 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[oacs]="0x12a"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[oacs]=0x12a 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[acl]="3"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[acl]=3 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[aerl]="3"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[aerl]=3 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[frmw]="0x3"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- 
# nvme1[frmw]=0x3 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.042 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[lpa]="0x7"' 00:15:05.042 07:36:04 -- nvme/functions.sh@25 -- # nvme1[lpa]=0x7 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.042 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[elpe]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[elpe]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[npss]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[npss]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[avscc]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[avscc]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[apsta]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[apsta]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[wctemp]="343"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[wctemp]=343 00:15:05.043 
07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[cctemp]="373"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[cctemp]=373 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[mtfa]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[mtfa]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[hmpre]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[hmpre]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[hmmin]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[hmmin]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[tnvmcap]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[tnvmcap]=0 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.043 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.043 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1[unvmcap]="0"' 00:15:05.043 07:36:04 -- nvme/functions.sh@25 -- # nvme1[unvmcap]=0 00:15:05.043 07:36:04 -- 
00:15:05.043 07:36:04 -- nvme/functions.sh@23-25 -- # (repetitive IFS=: / read -r reg val / eval xtrace condensed; parsed values summarized below)
00:15:05.043 07:36:04 -- nvme_get nvme1 id-ctrl (continued): rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 oaqd=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:15:05.045 07:36:04 -- nvme/functions.sh@55-59 -- # local -n _ctrl_ns=nvme1_ns; [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]; ns_dev=nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
00:15:05.045 07:36:04 -- nvme_get nvme1n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:15:05.047 07:36:04 -- nvme/functions.sh@60 -- # _ctrl_ns[1]=nvme1n1
00:15:05.047 07:36:04 -- nvme/functions.sh@56-59 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]; ns_dev=nvme1n2; nvme_get nvme1n2 id-ns /dev/nvme1n2
00:15:05.047 07:36:04 -- nvme_get nvme1n2 id-ns (trace cut off mid-loop): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 ...
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nabspf]="0"' 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[nabspf]=0 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.048 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[noiob]="0"' 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[noiob]=0 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.048 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nvmcap]="0"' 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[nvmcap]=0 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.048 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npwg]="0"' 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[npwg]=0 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.048 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npwa]="0"' 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[npwa]=0 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.048 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npdg]="0"' 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[npdg]=0 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.048 07:36:04 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npda]="0"' 00:15:05.048 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[npda]=0 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.048 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nows]="0"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[nows]=0 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[mssrl]="128"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[mssrl]=128 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[mcl]="128"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[mcl]=128 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[msrc]="127"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[msrc]=127 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nulbaf]="0"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[nulbaf]=0 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[anagrpid]="0"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[anagrpid]=0 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nsattr]="0"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[nsattr]=0 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nvmsetid]="0"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[nvmsetid]=0 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[endgid]="0"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[endgid]=0 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[eui64]=0000000000000000 
00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # 
IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:05.049 07:36:04 -- nvme/functions.sh@25 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.049 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:15:05.049 07:36:04 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:05.049 07:36:04 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:15:05.049 07:36:04 -- nvme/functions.sh@58 -- # ns_dev=nvme1n3 00:15:05.049 07:36:04 -- nvme/functions.sh@59 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:15:05.049 07:36:04 -- nvme/functions.sh@19 -- # local ref=nvme1n3 reg val 00:15:05.049 07:36:04 -- nvme/functions.sh@20 -- # shift 00:15:05.050 07:36:04 -- nvme/functions.sh@22 -- # local -gA 'nvme1n3=()' 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- 
nvme/functions.sh@18 -- # nvme id-ns /dev/nvme1n3 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nsze]="0x100000"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nsze]=0x100000 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[ncap]="0x100000"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[ncap]=0x100000 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nuse]="0x100000"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nuse]=0x100000 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nsfeat]=0x14 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nlbaf]="7"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nlbaf]=7 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- 
nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[flbas]="0x4"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[flbas]=0x4 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[mc]="0x3"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[mc]=0x3 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[dpc]="0x1f"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[dpc]=0x1f 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[dps]="0"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[dps]=0 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nmic]="0"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nmic]=0 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[rescap]="0"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[rescap]=0 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[fpi]="0"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[fpi]=0 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.050 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.050 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[dlfeat]="1"' 00:15:05.050 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[dlfeat]=1 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nawun]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nawun]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nawupf]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nawupf]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nacwu]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nacwu]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nabsn]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nabsn]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nabo]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nabo]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nabspf]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nabspf]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[noiob]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[noiob]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nvmcap]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nvmcap]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npwg]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[npwg]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npwa]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[npwa]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npdg]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[npdg]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npda]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[npda]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nows]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nows]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[mssrl]="128"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[mssrl]=128 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[mcl]="128"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[mcl]=128 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[msrc]="127"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[msrc]=127 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nulbaf]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nulbaf]=0 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.051 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.051 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[anagrpid]="0"' 00:15:05.051 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[anagrpid]=0 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nsattr]="0"' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nsattr]=0 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nvmsetid]="0"' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nvmsetid]=0 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[endgid]="0"' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[endgid]=0 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # 
IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[eui64]=0000000000000000 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 
07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:05.052 07:36:04 -- nvme/functions.sh@25 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # IFS=: 00:15:05.052 07:36:04 -- nvme/functions.sh@23 -- # read -r reg val 00:15:05.052 07:36:04 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:15:05.052 07:36:04 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme1 00:15:05.052 07:36:04 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme1_ns 00:15:05.052 07:36:04 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:11.0 00:15:05.052 07:36:04 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme1 00:15:05.052 07:36:04 
-- nvme/functions.sh@67 -- # (( 2 > 0 )) 00:15:05.052 07:36:04 -- nvme/functions.sh@219 -- # local _ctrls ctrl 00:15:05.052 07:36:04 -- nvme/functions.sh@220 -- # local unvmcap tnvmcap cntlid size blksize=512 00:15:05.052 07:36:04 -- nvme/functions.sh@222 -- # _ctrls=($(get_nvme_with_ns_management)) 00:15:05.052 07:36:04 -- nvme/functions.sh@222 -- # get_nvme_with_ns_management 00:15:05.052 07:36:04 -- nvme/functions.sh@157 -- # local _ctrls 00:15:05.052 07:36:04 -- nvme/functions.sh@159 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:15:05.053 07:36:04 -- nvme/functions.sh@159 -- # get_nvmes_with_ns_management 00:15:05.053 07:36:04 -- nvme/functions.sh@146 -- # (( 2 == 0 )) 00:15:05.053 07:36:04 -- nvme/functions.sh@148 -- # local ctrl 00:15:05.053 07:36:04 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:15:05.053 07:36:04 -- nvme/functions.sh@150 -- # get_oacs nvme1 nsmgt 00:15:05.053 07:36:04 -- nvme/functions.sh@123 -- # local ctrl=nvme1 bit=nsmgt 00:15:05.053 07:36:04 -- nvme/functions.sh@124 -- # local -A bits 00:15:05.053 07:36:04 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:15:05.053 07:36:04 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:15:05.053 07:36:04 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:15:05.053 07:36:04 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:15:05.053 07:36:04 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:15:05.053 07:36:04 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:15:05.053 07:36:04 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:15:05.053 07:36:04 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:15:05.053 07:36:04 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:15:05.053 07:36:04 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:15:05.053 07:36:04 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:15:05.053 07:36:04 -- nvme/functions.sh@139 -- # bit=nsmgt 00:15:05.053 07:36:04 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:15:05.053 07:36:04 -- 
nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme1 oacs 00:15:05.053 07:36:04 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=oacs 00:15:05.053 07:36:04 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:15:05.053 07:36:04 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@78 -- # echo 0x12a 00:15:05.053 07:36:04 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:15:05.053 07:36:04 -- nvme/functions.sh@150 -- # echo nvme1 00:15:05.053 07:36:04 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:15:05.053 07:36:04 -- nvme/functions.sh@150 -- # get_oacs nvme0 nsmgt 00:15:05.053 07:36:04 -- nvme/functions.sh@123 -- # local ctrl=nvme0 bit=nsmgt 00:15:05.053 07:36:04 -- nvme/functions.sh@124 -- # local -A bits 00:15:05.053 07:36:04 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:15:05.053 07:36:04 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:15:05.053 07:36:04 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:15:05.053 07:36:04 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:15:05.053 07:36:04 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:15:05.053 07:36:04 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:15:05.053 07:36:04 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:15:05.053 07:36:04 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:15:05.053 07:36:04 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:15:05.053 07:36:04 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:15:05.053 07:36:04 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:15:05.053 07:36:04 -- nvme/functions.sh@139 -- # bit=nsmgt 00:15:05.053 07:36:04 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme0 oacs 00:15:05.053 07:36:04 -- nvme/functions.sh@71 -- # local ctrl=nvme0 reg=oacs 00:15:05.053 07:36:04 -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:15:05.053 
07:36:04 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:15:05.053 07:36:04 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@78 -- # echo 0x12a 00:15:05.053 07:36:04 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:15:05.053 07:36:04 -- nvme/functions.sh@150 -- # echo nvme0 00:15:05.053 07:36:04 -- nvme/functions.sh@153 -- # return 0 00:15:05.053 07:36:04 -- nvme/functions.sh@160 -- # (( 2 > 0 )) 00:15:05.053 07:36:04 -- nvme/functions.sh@161 -- # echo nvme1 00:15:05.053 07:36:04 -- nvme/functions.sh@162 -- # return 0 00:15:05.053 07:36:04 -- nvme/functions.sh@224 -- # for ctrl in "${_ctrls[@]}" 00:15:05.053 07:36:04 -- nvme/functions.sh@229 -- # get_nvme_ctrl_feature nvme1 unvmcap 00:15:05.053 07:36:04 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=unvmcap 00:15:05.053 07:36:04 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:15:05.053 07:36:04 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@78 -- # echo 0 00:15:05.053 07:36:04 -- nvme/functions.sh@229 -- # unvmcap=0 00:15:05.053 07:36:04 -- nvme/functions.sh@230 -- # get_nvme_ctrl_feature nvme1 tnvmcap 00:15:05.053 07:36:04 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=tnvmcap 00:15:05.053 07:36:04 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:15:05.053 07:36:04 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@78 -- # echo 0 00:15:05.053 07:36:04 -- nvme/functions.sh@230 -- # tnvmcap=0 00:15:05.053 07:36:04 -- nvme/functions.sh@231 -- # get_nvme_ctrl_feature nvme1 cntlid 00:15:05.053 07:36:04 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=cntlid 00:15:05.053 07:36:04 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:15:05.053 07:36:04 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:15:05.053 07:36:04 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 
00:15:05.053 07:36:04 -- nvme/functions.sh@78 -- # echo 0 00:15:05.053 07:36:04 -- nvme/functions.sh@231 -- # cntlid=0 00:15:05.053 07:36:04 -- nvme/functions.sh@232 -- # (( unvmcap == 0 )) 00:15:05.053 07:36:04 -- nvme/functions.sh@234 -- # continue 00:15:05.053 07:36:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:15:05.053 07:36:04 -- common/autotest_common.sh@733 -- # xtrace_disable 00:15:05.053 07:36:04 -- common/autotest_common.sh@10 -- # set +x 00:15:05.316 07:36:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:15:05.316 07:36:04 -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:05.316 07:36:04 -- common/autotest_common.sh@10 -- # set +x 00:15:05.316 07:36:04 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:05.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:05.910 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:06.168 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:06.168 07:36:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:15:06.168 07:36:05 -- common/autotest_common.sh@733 -- # xtrace_disable 00:15:06.168 07:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:06.168 07:36:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:15:06.168 07:36:05 -- common/autotest_common.sh@1519 -- # local bdfs bdf bdf_id 00:15:06.168 07:36:05 -- common/autotest_common.sh@1521 -- # mapfile -t bdfs 00:15:06.168 07:36:05 -- common/autotest_common.sh@1521 -- # get_nvme_bdfs_by_id 0x0a54 00:15:06.168 07:36:05 -- common/autotest_common.sh@1503 -- # bdfs=() 00:15:06.168 07:36:05 -- common/autotest_common.sh@1503 -- # _bdfs=() 00:15:06.168 07:36:05 -- common/autotest_common.sh@1503 -- # local bdfs _bdfs bdf 00:15:06.168 07:36:05 -- common/autotest_common.sh@1504 -- # _bdfs=($(get_nvme_bdfs)) 00:15:06.168 07:36:05 -- common/autotest_common.sh@1504 -- # get_nvme_bdfs 00:15:06.168 07:36:05 -- common/autotest_common.sh@1484 
-- # bdfs=() 00:15:06.168 07:36:05 -- common/autotest_common.sh@1484 -- # local bdfs 00:15:06.168 07:36:05 -- common/autotest_common.sh@1485 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:06.168 07:36:05 -- common/autotest_common.sh@1485 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:06.168 07:36:05 -- common/autotest_common.sh@1485 -- # jq -r '.config[].params.traddr' 00:15:06.168 07:36:05 -- common/autotest_common.sh@1486 -- # (( 2 == 0 )) 00:15:06.168 07:36:05 -- common/autotest_common.sh@1490 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:15:06.168 07:36:05 -- common/autotest_common.sh@1506 -- # for bdf in "${_bdfs[@]}" 00:15:06.168 07:36:05 -- common/autotest_common.sh@1507 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:15:06.168 07:36:05 -- common/autotest_common.sh@1507 -- # device=0x0010 00:15:06.169 07:36:05 -- common/autotest_common.sh@1508 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:06.169 07:36:05 -- common/autotest_common.sh@1506 -- # for bdf in "${_bdfs[@]}" 00:15:06.169 07:36:05 -- common/autotest_common.sh@1507 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:15:06.169 07:36:05 -- common/autotest_common.sh@1507 -- # device=0x0010 00:15:06.169 07:36:05 -- common/autotest_common.sh@1508 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:06.169 07:36:05 -- common/autotest_common.sh@1513 -- # (( 0 > 0 )) 00:15:06.169 07:36:05 -- common/autotest_common.sh@1513 -- # return 0 00:15:06.169 07:36:05 -- common/autotest_common.sh@1522 -- # [[ -z '' ]] 00:15:06.169 07:36:05 -- common/autotest_common.sh@1523 -- # return 0 00:15:06.169 07:36:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:15:06.169 07:36:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:15:06.169 07:36:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:15:06.169 07:36:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:15:06.169 07:36:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:15:06.169 07:36:05 -- common/autotest_common.sh@727 -- # 
xtrace_disable 00:15:06.169 07:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:06.169 07:36:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:15:06.169 07:36:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:06.169 07:36:05 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:06.169 07:36:05 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:06.169 07:36:05 -- common/autotest_common.sh@10 -- # set +x 00:15:06.169 ************************************ 00:15:06.169 START TEST env 00:15:06.169 ************************************ 00:15:06.169 07:36:05 env -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:06.425 * Looking for test storage... 00:15:06.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1626 -- # lcov --version 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:06.426 07:36:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.426 07:36:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.426 07:36:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.426 07:36:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.426 07:36:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.426 07:36:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.426 07:36:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.426 07:36:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.426 07:36:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.426 07:36:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.426 07:36:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.426 07:36:05 env -- scripts/common.sh@344 -- # case "$op" in 
00:15:06.426 07:36:05 env -- scripts/common.sh@345 -- # : 1 00:15:06.426 07:36:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.426 07:36:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:06.426 07:36:05 env -- scripts/common.sh@365 -- # decimal 1 00:15:06.426 07:36:05 env -- scripts/common.sh@353 -- # local d=1 00:15:06.426 07:36:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.426 07:36:05 env -- scripts/common.sh@355 -- # echo 1 00:15:06.426 07:36:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.426 07:36:05 env -- scripts/common.sh@366 -- # decimal 2 00:15:06.426 07:36:05 env -- scripts/common.sh@353 -- # local d=2 00:15:06.426 07:36:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.426 07:36:05 env -- scripts/common.sh@355 -- # echo 2 00:15:06.426 07:36:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.426 07:36:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.426 07:36:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.426 07:36:05 env -- scripts/common.sh@368 -- # return 0 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.426 --rc genhtml_branch_coverage=1 00:15:06.426 --rc genhtml_function_coverage=1 00:15:06.426 --rc genhtml_legend=1 00:15:06.426 --rc geninfo_all_blocks=1 00:15:06.426 --rc geninfo_unexecuted_blocks=1 00:15:06.426 00:15:06.426 ' 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.426 --rc genhtml_branch_coverage=1 00:15:06.426 --rc genhtml_function_coverage=1 00:15:06.426 --rc genhtml_legend=1 00:15:06.426 --rc geninfo_all_blocks=1 00:15:06.426 --rc 
geninfo_unexecuted_blocks=1 00:15:06.426 00:15:06.426 ' 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.426 --rc genhtml_branch_coverage=1 00:15:06.426 --rc genhtml_function_coverage=1 00:15:06.426 --rc genhtml_legend=1 00:15:06.426 --rc geninfo_all_blocks=1 00:15:06.426 --rc geninfo_unexecuted_blocks=1 00:15:06.426 00:15:06.426 ' 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.426 --rc genhtml_branch_coverage=1 00:15:06.426 --rc genhtml_function_coverage=1 00:15:06.426 --rc genhtml_legend=1 00:15:06.426 --rc geninfo_all_blocks=1 00:15:06.426 --rc geninfo_unexecuted_blocks=1 00:15:06.426 00:15:06.426 ' 00:15:06.426 07:36:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:06.426 07:36:05 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:06.426 07:36:05 env -- common/autotest_common.sh@10 -- # set +x 00:15:06.426 ************************************ 00:15:06.426 START TEST env_memory 00:15:06.426 ************************************ 00:15:06.426 07:36:05 env.env_memory -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:06.426 00:15:06.426 00:15:06.426 CUnit - A unit testing framework for C - Version 2.1-3 00:15:06.426 http://cunit.sourceforge.net/ 00:15:06.426 00:15:06.426 00:15:06.426 Suite: memory 00:15:06.426 Test: alloc and free memory map ...[2024-10-07 07:36:05.983012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:06.684 passed 00:15:06.684 Test: mem map translation ...[2024-10-07 07:36:06.033786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:06.684 [2024-10-07 07:36:06.033857] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:06.684 [2024-10-07 07:36:06.033939] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:06.684 [2024-10-07 07:36:06.033987] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:06.684 passed 00:15:06.684 Test: mem map registration ...[2024-10-07 07:36:06.113145] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:15:06.684 [2024-10-07 07:36:06.113209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:15:06.684 passed 00:15:06.684 Test: mem map adjacent registrations ...passed 00:15:06.684 00:15:06.684 Run Summary: Type Total Ran Passed Failed Inactive 00:15:06.684 suites 1 1 n/a 0 0 00:15:06.684 tests 4 4 4 0 0 00:15:06.684 asserts 152 152 152 0 n/a 00:15:06.684 00:15:06.684 Elapsed time = 0.278 seconds 00:15:06.684 00:15:06.684 real 0m0.322s 00:15:06.684 user 0m0.283s 00:15:06.684 sys 0m0.031s 00:15:06.684 ************************************ 00:15:06.684 END TEST env_memory 00:15:06.684 ************************************ 00:15:06.684 07:36:06 env.env_memory -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:06.684 07:36:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:15:06.942 07:36:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:06.942 07:36:06 env -- common/autotest_common.sh@1104 -- # 
'[' 2 -le 1 ']' 00:15:06.942 07:36:06 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:06.942 07:36:06 env -- common/autotest_common.sh@10 -- # set +x 00:15:06.942 ************************************ 00:15:06.942 START TEST env_vtophys 00:15:06.942 ************************************ 00:15:06.942 07:36:06 env.env_vtophys -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:06.942 EAL: lib.eal log level changed from notice to debug 00:15:06.942 EAL: Detected lcore 0 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 1 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 2 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 3 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 4 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 5 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 6 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 7 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 8 as core 0 on socket 0 00:15:06.942 EAL: Detected lcore 9 as core 0 on socket 0 00:15:06.942 EAL: Maximum logical cores by configuration: 128 00:15:06.942 EAL: Detected CPU lcores: 10 00:15:06.942 EAL: Detected NUMA nodes: 1 00:15:06.942 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:15:06.942 EAL: Detected shared linkage of DPDK 00:15:06.942 EAL: No shared files mode enabled, IPC will be disabled 00:15:06.942 EAL: Selected IOVA mode 'PA' 00:15:06.942 EAL: Probing VFIO support... 00:15:06.942 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:06.942 EAL: VFIO modules not loaded, skipping VFIO support... 00:15:06.942 EAL: Ask a virtual area of 0x2e000 bytes 00:15:06.942 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:15:06.942 EAL: Setting up physically contiguous memory... 
00:15:06.942 EAL: Setting maximum number of open files to 524288 00:15:06.942 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:15:06.942 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:15:06.942 EAL: Ask a virtual area of 0x61000 bytes 00:15:06.942 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:15:06.942 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:06.942 EAL: Ask a virtual area of 0x400000000 bytes 00:15:06.942 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:15:06.942 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:15:06.942 EAL: Ask a virtual area of 0x61000 bytes 00:15:06.942 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:15:06.942 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:06.942 EAL: Ask a virtual area of 0x400000000 bytes 00:15:06.942 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:15:06.942 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:15:06.942 EAL: Ask a virtual area of 0x61000 bytes 00:15:06.942 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:15:06.942 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:06.942 EAL: Ask a virtual area of 0x400000000 bytes 00:15:06.942 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:15:06.942 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:15:06.942 EAL: Ask a virtual area of 0x61000 bytes 00:15:06.942 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:15:06.942 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:06.942 EAL: Ask a virtual area of 0x400000000 bytes 00:15:06.942 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:15:06.942 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:15:06.942 EAL: Hugepages will be freed exactly as allocated. 
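The nvme/functions.sh xtrace at the top of this excerpt (functions.sh@71–@150) walks a lookup pattern worth noting: per-controller register values live in shell variables named `<ctrl>_<reg>`, dereferenced through a bash nameref (`local -n`), and OACS capability bits are tested against an associative array of bit masks. A minimal self-contained sketch of that pattern, with a hard-coded stand-in for the harness's controller table (the real harness populates `nvmeN_*` variables from identify data; `0x12a` is simply the value traced above):

```shell
#!/usr/bin/env bash
# Stand-in register table; the real harness fills nvmeN_* variables
# from controller identify data. 0x12a is the OACS value traced in this log.
nvme1_oacs=0x12a

# Dereference "<ctrl>_<reg>" via a bash nameref, as nvme/functions.sh@71 does.
get_nvme_ctrl_feature() {
	local ctrl=$1 reg=${2:-oacs}
	local -n _reg="${ctrl}_${reg}"
	[[ -n $_reg ]] && echo "$_reg"
}

# Test one Optional Admin Command Support bit, as nvme/functions.sh@123 does.
# Only a few of the OACS bit names from the trace are reproduced here.
get_oacs() {
	local ctrl=$1 bit=$2
	local -A bits=([nsmgt]=8 [self-test]=16 [directives]=32)
	[[ -n ${bits[$bit]} ]] || return 1
	local oacs
	oacs=$(get_nvme_ctrl_feature "$ctrl" oacs) || return 1
	(( oacs & ${bits[$bit]} ))
}

get_oacs nvme1 nsmgt && echo "nvme1 supports namespace management"
```

The nameref indirection is what lets a single `get_nvme_ctrl_feature` serve every controller and register without `eval`; bash arithmetic accepts the `0x`-prefixed hex value directly in the bitwise test.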
00:15:06.942 EAL: No shared files mode enabled, IPC is disabled 00:15:06.942 EAL: No shared files mode enabled, IPC is disabled 00:15:06.942 EAL: TSC frequency is ~2100000 KHz 00:15:06.942 EAL: Main lcore 0 is ready (tid=7fbb5c10aa40;cpuset=[0]) 00:15:06.942 EAL: Trying to obtain current memory policy. 00:15:06.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:06.942 EAL: Restoring previous memory policy: 0 00:15:06.942 EAL: request: mp_malloc_sync 00:15:06.942 EAL: No shared files mode enabled, IPC is disabled 00:15:06.942 EAL: Heap on socket 0 was expanded by 2MB 00:15:06.942 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:06.942 EAL: No PCI address specified using 'addr=' in: bus=pci 00:15:06.942 EAL: Mem event callback 'spdk:(nil)' registered 00:15:06.942 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:15:07.200 00:15:07.200 00:15:07.200 CUnit - A unit testing framework for C - Version 2.1-3 00:15:07.200 http://cunit.sourceforge.net/ 00:15:07.200 00:15:07.200 00:15:07.200 Suite: components_suite 00:15:07.764 Test: vtophys_malloc_test ...passed 00:15:07.764 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:15:07.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:07.764 EAL: Restoring previous memory policy: 4 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was expanded by 4MB 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was shrunk by 4MB 00:15:07.764 EAL: Trying to obtain current memory policy. 
00:15:07.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:07.764 EAL: Restoring previous memory policy: 4 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was expanded by 6MB 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was shrunk by 6MB 00:15:07.764 EAL: Trying to obtain current memory policy. 00:15:07.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:07.764 EAL: Restoring previous memory policy: 4 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was expanded by 10MB 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was shrunk by 10MB 00:15:07.764 EAL: Trying to obtain current memory policy. 00:15:07.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:07.764 EAL: Restoring previous memory policy: 4 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was expanded by 18MB 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was shrunk by 18MB 00:15:07.764 EAL: Trying to obtain current memory policy. 
00:15:07.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:07.764 EAL: Restoring previous memory policy: 4 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was expanded by 34MB 00:15:07.764 EAL: Calling mem event callback 'spdk:(nil)' 00:15:07.764 EAL: request: mp_malloc_sync 00:15:07.764 EAL: No shared files mode enabled, IPC is disabled 00:15:07.764 EAL: Heap on socket 0 was shrunk by 34MB 00:15:08.021 EAL: Trying to obtain current memory policy. 00:15:08.021 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:08.021 EAL: Restoring previous memory policy: 4 00:15:08.021 EAL: Calling mem event callback 'spdk:(nil)' 00:15:08.021 EAL: request: mp_malloc_sync 00:15:08.021 EAL: No shared files mode enabled, IPC is disabled 00:15:08.021 EAL: Heap on socket 0 was expanded by 66MB 00:15:08.021 EAL: Calling mem event callback 'spdk:(nil)' 00:15:08.021 EAL: request: mp_malloc_sync 00:15:08.021 EAL: No shared files mode enabled, IPC is disabled 00:15:08.021 EAL: Heap on socket 0 was shrunk by 66MB 00:15:08.278 EAL: Trying to obtain current memory policy. 00:15:08.278 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:08.278 EAL: Restoring previous memory policy: 4 00:15:08.278 EAL: Calling mem event callback 'spdk:(nil)' 00:15:08.278 EAL: request: mp_malloc_sync 00:15:08.278 EAL: No shared files mode enabled, IPC is disabled 00:15:08.278 EAL: Heap on socket 0 was expanded by 130MB 00:15:08.535 EAL: Calling mem event callback 'spdk:(nil)' 00:15:08.535 EAL: request: mp_malloc_sync 00:15:08.535 EAL: No shared files mode enabled, IPC is disabled 00:15:08.535 EAL: Heap on socket 0 was shrunk by 130MB 00:15:08.794 EAL: Trying to obtain current memory policy. 
00:15:08.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:08.794 EAL: Restoring previous memory policy: 4 00:15:08.794 EAL: Calling mem event callback 'spdk:(nil)' 00:15:08.794 EAL: request: mp_malloc_sync 00:15:08.794 EAL: No shared files mode enabled, IPC is disabled 00:15:08.794 EAL: Heap on socket 0 was expanded by 258MB 00:15:09.375 EAL: Calling mem event callback 'spdk:(nil)' 00:15:09.375 EAL: request: mp_malloc_sync 00:15:09.375 EAL: No shared files mode enabled, IPC is disabled 00:15:09.375 EAL: Heap on socket 0 was shrunk by 258MB 00:15:09.973 EAL: Trying to obtain current memory policy. 00:15:09.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:09.973 EAL: Restoring previous memory policy: 4 00:15:09.973 EAL: Calling mem event callback 'spdk:(nil)' 00:15:09.973 EAL: request: mp_malloc_sync 00:15:09.973 EAL: No shared files mode enabled, IPC is disabled 00:15:09.973 EAL: Heap on socket 0 was expanded by 514MB 00:15:11.345 EAL: Calling mem event callback 'spdk:(nil)' 00:15:11.345 EAL: request: mp_malloc_sync 00:15:11.345 EAL: No shared files mode enabled, IPC is disabled 00:15:11.345 EAL: Heap on socket 0 was shrunk by 514MB 00:15:12.280 EAL: Trying to obtain current memory policy. 
00:15:12.280 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:12.609 EAL: Restoring previous memory policy: 4 00:15:12.609 EAL: Calling mem event callback 'spdk:(nil)' 00:15:12.609 EAL: request: mp_malloc_sync 00:15:12.609 EAL: No shared files mode enabled, IPC is disabled 00:15:12.609 EAL: Heap on socket 0 was expanded by 1026MB 00:15:14.514 EAL: Calling mem event callback 'spdk:(nil)' 00:15:14.772 EAL: request: mp_malloc_sync 00:15:14.772 EAL: No shared files mode enabled, IPC is disabled 00:15:14.772 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:16.675 passed 00:15:16.675 00:15:16.675 Run Summary: Type Total Ran Passed Failed Inactive 00:15:16.675 suites 1 1 n/a 0 0 00:15:16.675 tests 2 2 2 0 0 00:15:16.675 asserts 4410 4410 4410 0 n/a 00:15:16.675 00:15:16.675 Elapsed time = 9.619 seconds 00:15:16.675 EAL: Calling mem event callback 'spdk:(nil)' 00:15:16.675 EAL: request: mp_malloc_sync 00:15:16.675 EAL: No shared files mode enabled, IPC is disabled 00:15:16.675 EAL: Heap on socket 0 was shrunk by 2MB 00:15:16.675 EAL: No shared files mode enabled, IPC is disabled 00:15:16.675 EAL: No shared files mode enabled, IPC is disabled 00:15:16.675 EAL: No shared files mode enabled, IPC is disabled 00:15:16.934 00:15:16.934 real 0m9.972s 00:15:16.934 user 0m8.825s 00:15:16.934 sys 0m0.970s 00:15:16.934 07:36:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:16.934 07:36:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:16.934 ************************************ 00:15:16.934 END TEST env_vtophys 00:15:16.934 ************************************ 00:15:16.934 07:36:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:16.934 07:36:16 env -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:16.934 07:36:16 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:16.934 07:36:16 env -- common/autotest_common.sh@10 -- # set +x 00:15:16.934 
************************************ 00:15:16.934 START TEST env_pci 00:15:16.934 ************************************ 00:15:16.934 07:36:16 env.env_pci -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:16.934 00:15:16.934 00:15:16.934 CUnit - A unit testing framework for C - Version 2.1-3 00:15:16.934 http://cunit.sourceforge.net/ 00:15:16.934 00:15:16.934 00:15:16.934 Suite: pci 00:15:16.934 Test: pci_hook ...[2024-10-07 07:36:16.349270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56382 has claimed it 00:15:16.934 passed 00:15:16.934 00:15:16.934 EAL: Cannot find device (10000:00:01.0) 00:15:16.934 EAL: Failed to attach device on primary process 00:15:16.934 Run Summary: Type Total Ran Passed Failed Inactive 00:15:16.934 suites 1 1 n/a 0 0 00:15:16.934 tests 1 1 1 0 0 00:15:16.934 asserts 25 25 25 0 n/a 00:15:16.934 00:15:16.934 Elapsed time = 0.008 seconds 00:15:16.934 00:15:16.934 real 0m0.086s 00:15:16.934 user 0m0.035s 00:15:16.934 sys 0m0.050s 00:15:16.934 07:36:16 env.env_pci -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:16.934 ************************************ 00:15:16.934 07:36:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:16.934 END TEST env_pci 00:15:16.934 ************************************ 00:15:16.934 07:36:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:16.934 07:36:16 env -- env/env.sh@15 -- # uname 00:15:16.934 07:36:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:16.934 07:36:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:16.934 07:36:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:16.934 07:36:16 env -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:15:16.934 07:36:16 env 
-- common/autotest_common.sh@1110 -- # xtrace_disable
00:15:16.934 07:36:16 env -- common/autotest_common.sh@10 -- # set +x
00:15:16.934 ************************************
00:15:16.934 START TEST env_dpdk_post_init
00:15:16.934 ************************************
00:15:16.934 07:36:16 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:15:17.193 EAL: Detected CPU lcores: 10
00:15:17.193 EAL: Detected NUMA nodes: 1
00:15:17.193 EAL: Detected shared linkage of DPDK
00:15:17.193 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:15:17.193 EAL: Selected IOVA mode 'PA'
00:15:17.193 TELEMETRY: No legacy callbacks, legacy socket not created
00:15:17.193 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:15:17.193 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:15:17.193 Starting DPDK initialization...
00:15:17.193 Starting SPDK post initialization...
00:15:17.193 SPDK NVMe probe
00:15:17.193 Attaching to 0000:00:10.0
00:15:17.193 Attaching to 0000:00:11.0
00:15:17.193 Attached to 0000:00:10.0
00:15:17.193 Attached to 0000:00:11.0
00:15:17.193 Cleaning up...
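Every `START TEST ... / END TEST ...` banner in this log is printed by the harness's `run_test` wrapper from autotest_common.sh, which brackets the actual test command and propagates its exit status. A rough standalone approximation of that wrapper (the real one also times the command, registers the result for the run summary, and manages xtrace; all of that is omitted here):

```shell
#!/usr/bin/env bash
# Simplified take on autotest_common.sh's run_test: print the banners
# seen throughout this log around the test command, preserving its
# exit status. Timing and xtrace handling are intentionally left out.
run_test() {
	local name=$1
	shift
	echo "************************************"
	echo "START TEST $name"
	echo "************************************"
	"$@" # the test executable or script, with its arguments
	local rc=$?
	echo "************************************"
	echo "END TEST $name"
	echo "************************************"
	return $rc
}

run_test env_demo echo "running the test body here"
```

Because the wrapper returns the wrapped command's status, nested invocations (as in `run_test env ... run_test env_memory ...` above) bubble a failure all the way up to the autotest driver.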
00:15:17.193 00:15:17.193 real 0m0.286s 00:15:17.193 user 0m0.076s 00:15:17.193 sys 0m0.110s 00:15:17.193 07:36:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:17.193 ************************************ 00:15:17.193 END TEST env_dpdk_post_init 00:15:17.193 07:36:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:17.193 ************************************ 00:15:17.450 07:36:16 env -- env/env.sh@26 -- # uname 00:15:17.450 07:36:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:17.450 07:36:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:17.450 07:36:16 env -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:17.450 07:36:16 env -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:17.450 07:36:16 env -- common/autotest_common.sh@10 -- # set +x 00:15:17.450 ************************************ 00:15:17.450 START TEST env_mem_callbacks 00:15:17.450 ************************************ 00:15:17.450 07:36:16 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:17.450 EAL: Detected CPU lcores: 10 00:15:17.450 EAL: Detected NUMA nodes: 1 00:15:17.450 EAL: Detected shared linkage of DPDK 00:15:17.450 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:17.450 EAL: Selected IOVA mode 'PA' 00:15:17.709 00:15:17.709 00:15:17.709 CUnit - A unit testing framework for C - Version 2.1-3 00:15:17.709 http://cunit.sourceforge.net/ 00:15:17.709 00:15:17.709 00:15:17.709 Suite: memory 00:15:17.709 Test: test ... 
00:15:17.709 register 0x200000200000 2097152 00:15:17.709 malloc 3145728 00:15:17.709 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:17.709 register 0x200000400000 4194304 00:15:17.709 buf 0x2000004fffc0 len 3145728 PASSED 00:15:17.709 malloc 64 00:15:17.709 buf 0x2000004ffec0 len 64 PASSED 00:15:17.709 malloc 4194304 00:15:17.709 register 0x200000800000 6291456 00:15:17.709 buf 0x2000009fffc0 len 4194304 PASSED 00:15:17.709 free 0x2000004fffc0 3145728 00:15:17.709 free 0x2000004ffec0 64 00:15:17.709 unregister 0x200000400000 4194304 PASSED 00:15:17.709 free 0x2000009fffc0 4194304 00:15:17.709 unregister 0x200000800000 6291456 PASSED 00:15:17.709 malloc 8388608 00:15:17.709 register 0x200000400000 10485760 00:15:17.709 buf 0x2000005fffc0 len 8388608 PASSED 00:15:17.709 free 0x2000005fffc0 8388608 00:15:17.709 unregister 0x200000400000 10485760 PASSED 00:15:17.709 passed 00:15:17.709 00:15:17.709 Run Summary: Type Total Ran Passed Failed Inactive 00:15:17.709 suites 1 1 n/a 0 0 00:15:17.709 tests 1 1 1 0 0 00:15:17.709 asserts 15 15 15 0 n/a 00:15:17.709 00:15:17.709 Elapsed time = 0.112 seconds 00:15:17.709 00:15:17.709 real 0m0.353s 00:15:17.709 user 0m0.157s 00:15:17.709 sys 0m0.091s 00:15:17.709 07:36:17 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:17.709 07:36:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 ************************************ 00:15:17.709 END TEST env_mem_callbacks 00:15:17.709 ************************************ 00:15:17.709 ************************************ 00:15:17.709 END TEST env 00:15:17.709 ************************************ 00:15:17.709 00:15:17.709 real 0m11.522s 00:15:17.709 user 0m9.582s 00:15:17.709 sys 0m1.558s 00:15:17.709 07:36:17 env -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:17.709 07:36:17 env -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 07:36:17 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:17.709 07:36:17 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:17.709 07:36:17 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:17.709 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:15:17.968 ************************************ 00:15:17.968 START TEST rpc 00:15:17.968 ************************************ 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:17.968 * Looking for test storage... 00:15:17.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1626 -- # lcov --version 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.968 07:36:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.968 07:36:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.968 07:36:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.968 07:36:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.968 07:36:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.968 07:36:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.968 07:36:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.968 07:36:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:17.968 07:36:17 rpc -- scripts/common.sh@345 -- # : 1 00:15:17.968 07:36:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.968 07:36:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.968 07:36:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:15:17.968 07:36:17 rpc -- scripts/common.sh@353 -- # local d=1 00:15:17.968 07:36:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.968 07:36:17 rpc -- scripts/common.sh@355 -- # echo 1 00:15:17.968 07:36:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.968 07:36:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@353 -- # local d=2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.968 07:36:17 rpc -- scripts/common.sh@355 -- # echo 2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.968 07:36:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.968 07:36:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.968 07:36:17 rpc -- scripts/common.sh@368 -- # return 0 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:17.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.968 --rc genhtml_branch_coverage=1 00:15:17.968 --rc genhtml_function_coverage=1 00:15:17.968 --rc genhtml_legend=1 00:15:17.968 --rc geninfo_all_blocks=1 00:15:17.968 --rc geninfo_unexecuted_blocks=1 00:15:17.968 00:15:17.968 ' 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:17.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.968 --rc genhtml_branch_coverage=1 00:15:17.968 --rc genhtml_function_coverage=1 00:15:17.968 --rc genhtml_legend=1 00:15:17.968 --rc geninfo_all_blocks=1 00:15:17.968 --rc geninfo_unexecuted_blocks=1 00:15:17.968 00:15:17.968 ' 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:17.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:15:17.968 --rc genhtml_branch_coverage=1 00:15:17.968 --rc genhtml_function_coverage=1 00:15:17.968 --rc genhtml_legend=1 00:15:17.968 --rc geninfo_all_blocks=1 00:15:17.968 --rc geninfo_unexecuted_blocks=1 00:15:17.968 00:15:17.968 ' 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:17.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.968 --rc genhtml_branch_coverage=1 00:15:17.968 --rc genhtml_function_coverage=1 00:15:17.968 --rc genhtml_legend=1 00:15:17.968 --rc geninfo_all_blocks=1 00:15:17.968 --rc geninfo_unexecuted_blocks=1 00:15:17.968 00:15:17.968 ' 00:15:17.968 07:36:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56515 00:15:17.968 07:36:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:17.968 07:36:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:17.968 07:36:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56515 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@834 -- # '[' -z 56515 ']' 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:15:17.968 07:36:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 [2024-10-07 07:36:17.647587] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:15:18.227 [2024-10-07 07:36:17.648056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56515 ] 00:15:18.485 [2024-10-07 07:36:17.839590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.745 [2024-10-07 07:36:18.094855] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:18.745 [2024-10-07 07:36:18.095180] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56515' to capture a snapshot of events at runtime. 00:15:18.745 [2024-10-07 07:36:18.095345] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.745 [2024-10-07 07:36:18.095483] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.745 [2024-10-07 07:36:18.095524] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56515 for offline analysis/debug. 
00:15:18.745 [2024-10-07 07:36:18.097152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.680 07:36:19 rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:15:19.680 07:36:19 rpc -- common/autotest_common.sh@867 -- # return 0 00:15:19.680 07:36:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:19.680 07:36:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:19.680 07:36:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:19.680 07:36:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:19.680 07:36:19 rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:19.680 07:36:19 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:19.680 07:36:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.680 ************************************ 00:15:19.680 START TEST rpc_integrity 00:15:19.680 ************************************ 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # rpc_integrity 00:15:19.680 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.680 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:19.680 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:19.680 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:19.680 07:36:19 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.680 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:19.680 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:19.680 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.680 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:19.680 { 00:15:19.680 "name": "Malloc0", 00:15:19.680 "aliases": [ 00:15:19.680 "6f79d6e2-5386-4685-a8c0-e00099ad1916" 00:15:19.680 ], 00:15:19.680 "product_name": "Malloc disk", 00:15:19.680 "block_size": 512, 00:15:19.680 "num_blocks": 16384, 00:15:19.680 "uuid": "6f79d6e2-5386-4685-a8c0-e00099ad1916", 00:15:19.680 "assigned_rate_limits": { 00:15:19.680 "rw_ios_per_sec": 0, 00:15:19.680 "rw_mbytes_per_sec": 0, 00:15:19.680 "r_mbytes_per_sec": 0, 00:15:19.680 "w_mbytes_per_sec": 0 00:15:19.680 }, 00:15:19.681 "claimed": false, 00:15:19.681 "zoned": false, 00:15:19.681 "supported_io_types": { 00:15:19.681 "read": true, 00:15:19.681 "write": true, 00:15:19.681 "unmap": true, 00:15:19.681 "flush": true, 00:15:19.681 "reset": true, 00:15:19.681 "nvme_admin": false, 00:15:19.681 "nvme_io": false, 00:15:19.681 "nvme_io_md": false, 00:15:19.681 "write_zeroes": true, 00:15:19.681 "zcopy": true, 00:15:19.681 "get_zone_info": false, 00:15:19.681 "zone_management": false, 00:15:19.681 "zone_append": false, 00:15:19.681 "compare": false, 00:15:19.681 "compare_and_write": false, 00:15:19.681 "abort": true, 00:15:19.681 "seek_hole": false, 
00:15:19.681 "seek_data": false, 00:15:19.681 "copy": true, 00:15:19.681 "nvme_iov_md": false 00:15:19.681 }, 00:15:19.681 "memory_domains": [ 00:15:19.681 { 00:15:19.681 "dma_device_id": "system", 00:15:19.681 "dma_device_type": 1 00:15:19.681 }, 00:15:19.681 { 00:15:19.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.681 "dma_device_type": 2 00:15:19.681 } 00:15:19.681 ], 00:15:19.681 "driver_specific": {} 00:15:19.681 } 00:15:19.681 ]' 00:15:19.681 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:19.940 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:19.940 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:19.940 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.940 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:19.940 [2024-10-07 07:36:19.279555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:19.940 [2024-10-07 07:36:19.279649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.940 [2024-10-07 07:36:19.279680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:19.940 [2024-10-07 07:36:19.279702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.940 [2024-10-07 07:36:19.283145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.940 [2024-10-07 07:36:19.283396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:19.940 Passthru0 00:15:19.940 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.940 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:19.940 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.940 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:15:19.940 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.940 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:19.940 { 00:15:19.940 "name": "Malloc0", 00:15:19.940 "aliases": [ 00:15:19.940 "6f79d6e2-5386-4685-a8c0-e00099ad1916" 00:15:19.940 ], 00:15:19.940 "product_name": "Malloc disk", 00:15:19.940 "block_size": 512, 00:15:19.940 "num_blocks": 16384, 00:15:19.940 "uuid": "6f79d6e2-5386-4685-a8c0-e00099ad1916", 00:15:19.940 "assigned_rate_limits": { 00:15:19.940 "rw_ios_per_sec": 0, 00:15:19.940 "rw_mbytes_per_sec": 0, 00:15:19.940 "r_mbytes_per_sec": 0, 00:15:19.940 "w_mbytes_per_sec": 0 00:15:19.940 }, 00:15:19.940 "claimed": true, 00:15:19.940 "claim_type": "exclusive_write", 00:15:19.940 "zoned": false, 00:15:19.940 "supported_io_types": { 00:15:19.940 "read": true, 00:15:19.940 "write": true, 00:15:19.940 "unmap": true, 00:15:19.940 "flush": true, 00:15:19.940 "reset": true, 00:15:19.940 "nvme_admin": false, 00:15:19.940 "nvme_io": false, 00:15:19.940 "nvme_io_md": false, 00:15:19.940 "write_zeroes": true, 00:15:19.940 "zcopy": true, 00:15:19.940 "get_zone_info": false, 00:15:19.941 "zone_management": false, 00:15:19.941 "zone_append": false, 00:15:19.941 "compare": false, 00:15:19.941 "compare_and_write": false, 00:15:19.941 "abort": true, 00:15:19.941 "seek_hole": false, 00:15:19.941 "seek_data": false, 00:15:19.941 "copy": true, 00:15:19.941 "nvme_iov_md": false 00:15:19.941 }, 00:15:19.941 "memory_domains": [ 00:15:19.941 { 00:15:19.941 "dma_device_id": "system", 00:15:19.941 "dma_device_type": 1 00:15:19.941 }, 00:15:19.941 { 00:15:19.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.941 "dma_device_type": 2 00:15:19.941 } 00:15:19.941 ], 00:15:19.941 "driver_specific": {} 00:15:19.941 }, 00:15:19.941 { 00:15:19.941 "name": "Passthru0", 00:15:19.941 "aliases": [ 00:15:19.941 "0f0726a7-1f5a-5197-9c67-59479d15690a" 00:15:19.941 ], 00:15:19.941 "product_name": "passthru", 00:15:19.941 
"block_size": 512, 00:15:19.941 "num_blocks": 16384, 00:15:19.941 "uuid": "0f0726a7-1f5a-5197-9c67-59479d15690a", 00:15:19.941 "assigned_rate_limits": { 00:15:19.941 "rw_ios_per_sec": 0, 00:15:19.941 "rw_mbytes_per_sec": 0, 00:15:19.941 "r_mbytes_per_sec": 0, 00:15:19.941 "w_mbytes_per_sec": 0 00:15:19.941 }, 00:15:19.941 "claimed": false, 00:15:19.941 "zoned": false, 00:15:19.941 "supported_io_types": { 00:15:19.941 "read": true, 00:15:19.941 "write": true, 00:15:19.941 "unmap": true, 00:15:19.941 "flush": true, 00:15:19.941 "reset": true, 00:15:19.941 "nvme_admin": false, 00:15:19.941 "nvme_io": false, 00:15:19.941 "nvme_io_md": false, 00:15:19.941 "write_zeroes": true, 00:15:19.941 "zcopy": true, 00:15:19.941 "get_zone_info": false, 00:15:19.941 "zone_management": false, 00:15:19.941 "zone_append": false, 00:15:19.941 "compare": false, 00:15:19.941 "compare_and_write": false, 00:15:19.941 "abort": true, 00:15:19.941 "seek_hole": false, 00:15:19.941 "seek_data": false, 00:15:19.941 "copy": true, 00:15:19.941 "nvme_iov_md": false 00:15:19.941 }, 00:15:19.941 "memory_domains": [ 00:15:19.941 { 00:15:19.941 "dma_device_id": "system", 00:15:19.941 "dma_device_type": 1 00:15:19.941 }, 00:15:19.941 { 00:15:19.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.941 "dma_device_type": 2 00:15:19.941 } 00:15:19.941 ], 00:15:19.941 "driver_specific": { 00:15:19.941 "passthru": { 00:15:19.941 "name": "Passthru0", 00:15:19.941 "base_bdev_name": "Malloc0" 00:15:19.941 } 00:15:19.941 } 00:15:19.941 } 00:15:19.941 ]' 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:19.941 07:36:19 rpc.rpc_integrity 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:19.941 ************************************ 00:15:19.941 END TEST rpc_integrity 00:15:19.941 ************************************ 00:15:19.941 07:36:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:19.941 00:15:19.941 real 0m0.363s 00:15:19.941 user 0m0.194s 00:15:19.941 sys 0m0.061s 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:19.941 07:36:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.201 07:36:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:20.201 07:36:19 rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:20.201 07:36:19 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:20.201 07:36:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.201 ************************************ 00:15:20.201 START TEST rpc_plugins 00:15:20.201 ************************************ 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # rpc_plugins 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:20.201 { 00:15:20.201 "name": "Malloc1", 00:15:20.201 "aliases": [ 00:15:20.201 "d60365fb-3c77-42a4-a964-d591a553454e" 00:15:20.201 ], 00:15:20.201 "product_name": "Malloc disk", 00:15:20.201 "block_size": 4096, 00:15:20.201 "num_blocks": 256, 00:15:20.201 "uuid": "d60365fb-3c77-42a4-a964-d591a553454e", 00:15:20.201 "assigned_rate_limits": { 00:15:20.201 "rw_ios_per_sec": 0, 00:15:20.201 "rw_mbytes_per_sec": 0, 00:15:20.201 "r_mbytes_per_sec": 0, 00:15:20.201 "w_mbytes_per_sec": 0 00:15:20.201 }, 00:15:20.201 "claimed": false, 00:15:20.201 "zoned": false, 00:15:20.201 "supported_io_types": { 00:15:20.201 "read": true, 00:15:20.201 "write": true, 00:15:20.201 "unmap": true, 00:15:20.201 "flush": true, 00:15:20.201 "reset": true, 00:15:20.201 "nvme_admin": false, 00:15:20.201 "nvme_io": false, 00:15:20.201 "nvme_io_md": false, 00:15:20.201 "write_zeroes": true, 00:15:20.201 "zcopy": true, 00:15:20.201 "get_zone_info": false, 00:15:20.201 "zone_management": false, 00:15:20.201 "zone_append": false, 00:15:20.201 "compare": false, 00:15:20.201 "compare_and_write": false, 00:15:20.201 "abort": true, 00:15:20.201 "seek_hole": false, 00:15:20.201 "seek_data": false, 00:15:20.201 "copy": 
true, 00:15:20.201 "nvme_iov_md": false 00:15:20.201 }, 00:15:20.201 "memory_domains": [ 00:15:20.201 { 00:15:20.201 "dma_device_id": "system", 00:15:20.201 "dma_device_type": 1 00:15:20.201 }, 00:15:20.201 { 00:15:20.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.201 "dma_device_type": 2 00:15:20.201 } 00:15:20.201 ], 00:15:20.201 "driver_specific": {} 00:15:20.201 } 00:15:20.201 ]' 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:15:20.201 07:36:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:20.201 00:15:20.201 real 0m0.169s 00:15:20.201 user 0m0.100s 00:15:20.201 sys 0m0.025s 00:15:20.201 ************************************ 00:15:20.201 END TEST rpc_plugins 00:15:20.201 ************************************ 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:20.201 07:36:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:20.460 07:36:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:20.460 07:36:19 rpc -- 
common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:20.460 07:36:19 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:20.460 07:36:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.460 ************************************ 00:15:20.460 START TEST rpc_trace_cmd_test 00:15:20.460 ************************************ 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # rpc_trace_cmd_test 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:20.460 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56515", 00:15:20.460 "tpoint_group_mask": "0x8", 00:15:20.460 "iscsi_conn": { 00:15:20.460 "mask": "0x2", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "scsi": { 00:15:20.460 "mask": "0x4", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "bdev": { 00:15:20.460 "mask": "0x8", 00:15:20.460 "tpoint_mask": "0xffffffffffffffff" 00:15:20.460 }, 00:15:20.460 "nvmf_rdma": { 00:15:20.460 "mask": "0x10", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "nvmf_tcp": { 00:15:20.460 "mask": "0x20", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "ftl": { 00:15:20.460 "mask": "0x40", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "blobfs": { 00:15:20.460 "mask": "0x80", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "dsa": { 00:15:20.460 "mask": "0x200", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "thread": { 00:15:20.460 "mask": "0x400", 00:15:20.460 
"tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "nvme_pcie": { 00:15:20.460 "mask": "0x800", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "iaa": { 00:15:20.460 "mask": "0x1000", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "nvme_tcp": { 00:15:20.460 "mask": "0x2000", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "bdev_nvme": { 00:15:20.460 "mask": "0x4000", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "sock": { 00:15:20.460 "mask": "0x8000", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "blob": { 00:15:20.460 "mask": "0x10000", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "bdev_raid": { 00:15:20.460 "mask": "0x20000", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 }, 00:15:20.460 "scheduler": { 00:15:20.460 "mask": "0x40000", 00:15:20.460 "tpoint_mask": "0x0" 00:15:20.460 } 00:15:20.460 }' 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:20.460 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:15:20.461 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:20.461 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:20.461 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:20.461 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:20.461 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:20.461 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:20.461 07:36:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:20.461 ************************************ 00:15:20.461 END TEST rpc_trace_cmd_test 00:15:20.461 ************************************ 00:15:20.461 07:36:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:20.461 00:15:20.461 real 0m0.242s 00:15:20.461 user 
0m0.188s 00:15:20.461 sys 0m0.042s 00:15:20.461 07:36:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:20.461 07:36:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.719 07:36:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:20.719 07:36:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:20.719 07:36:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:20.719 07:36:20 rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:20.719 07:36:20 rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:20.719 07:36:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.719 ************************************ 00:15:20.719 START TEST rpc_daemon_integrity 00:15:20.719 ************************************ 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # rpc_integrity 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:20.719 { 00:15:20.719 "name": "Malloc2", 00:15:20.719 "aliases": [ 00:15:20.719 "6a5fd060-d88e-4ff6-8d0c-6142a5c68ee9" 00:15:20.719 ], 00:15:20.719 "product_name": "Malloc disk", 00:15:20.719 "block_size": 512, 00:15:20.719 "num_blocks": 16384, 00:15:20.719 "uuid": "6a5fd060-d88e-4ff6-8d0c-6142a5c68ee9", 00:15:20.719 "assigned_rate_limits": { 00:15:20.719 "rw_ios_per_sec": 0, 00:15:20.719 "rw_mbytes_per_sec": 0, 00:15:20.719 "r_mbytes_per_sec": 0, 00:15:20.719 "w_mbytes_per_sec": 0 00:15:20.719 }, 00:15:20.719 "claimed": false, 00:15:20.719 "zoned": false, 00:15:20.719 "supported_io_types": { 00:15:20.719 "read": true, 00:15:20.719 "write": true, 00:15:20.719 "unmap": true, 00:15:20.719 "flush": true, 00:15:20.719 "reset": true, 00:15:20.719 "nvme_admin": false, 00:15:20.719 "nvme_io": false, 00:15:20.719 "nvme_io_md": false, 00:15:20.719 "write_zeroes": true, 00:15:20.719 "zcopy": true, 00:15:20.719 "get_zone_info": false, 00:15:20.719 "zone_management": false, 00:15:20.719 "zone_append": false, 00:15:20.719 "compare": false, 00:15:20.719 "compare_and_write": false, 00:15:20.719 "abort": true, 00:15:20.719 "seek_hole": false, 00:15:20.719 "seek_data": false, 00:15:20.719 "copy": true, 00:15:20.719 "nvme_iov_md": false 00:15:20.719 }, 00:15:20.719 "memory_domains": [ 00:15:20.719 { 00:15:20.719 "dma_device_id": "system", 00:15:20.719 "dma_device_type": 1 00:15:20.719 }, 00:15:20.719 { 00:15:20.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.719 "dma_device_type": 2 00:15:20.719 } 
00:15:20.719 ], 00:15:20.719 "driver_specific": {} 00:15:20.719 } 00:15:20.719 ]' 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.719 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.719 [2024-10-07 07:36:20.255110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:20.719 [2024-10-07 07:36:20.255195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.719 [2024-10-07 07:36:20.255226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:20.719 [2024-10-07 07:36:20.255244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.719 [2024-10-07 07:36:20.258330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.720 [2024-10-07 07:36:20.258549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:20.720 Passthru0 00:15:20.720 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.720 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:20.720 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.720 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.978 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.978 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:20.978 { 00:15:20.978 "name": "Malloc2", 00:15:20.978 "aliases": [ 00:15:20.978 "6a5fd060-d88e-4ff6-8d0c-6142a5c68ee9" 
00:15:20.978 ], 00:15:20.978 "product_name": "Malloc disk", 00:15:20.978 "block_size": 512, 00:15:20.978 "num_blocks": 16384, 00:15:20.978 "uuid": "6a5fd060-d88e-4ff6-8d0c-6142a5c68ee9", 00:15:20.978 "assigned_rate_limits": { 00:15:20.978 "rw_ios_per_sec": 0, 00:15:20.978 "rw_mbytes_per_sec": 0, 00:15:20.978 "r_mbytes_per_sec": 0, 00:15:20.978 "w_mbytes_per_sec": 0 00:15:20.978 }, 00:15:20.978 "claimed": true, 00:15:20.978 "claim_type": "exclusive_write", 00:15:20.978 "zoned": false, 00:15:20.978 "supported_io_types": { 00:15:20.978 "read": true, 00:15:20.978 "write": true, 00:15:20.978 "unmap": true, 00:15:20.978 "flush": true, 00:15:20.978 "reset": true, 00:15:20.978 "nvme_admin": false, 00:15:20.978 "nvme_io": false, 00:15:20.978 "nvme_io_md": false, 00:15:20.978 "write_zeroes": true, 00:15:20.978 "zcopy": true, 00:15:20.978 "get_zone_info": false, 00:15:20.978 "zone_management": false, 00:15:20.978 "zone_append": false, 00:15:20.978 "compare": false, 00:15:20.978 "compare_and_write": false, 00:15:20.978 "abort": true, 00:15:20.978 "seek_hole": false, 00:15:20.978 "seek_data": false, 00:15:20.978 "copy": true, 00:15:20.978 "nvme_iov_md": false 00:15:20.978 }, 00:15:20.978 "memory_domains": [ 00:15:20.978 { 00:15:20.978 "dma_device_id": "system", 00:15:20.978 "dma_device_type": 1 00:15:20.978 }, 00:15:20.978 { 00:15:20.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.978 "dma_device_type": 2 00:15:20.978 } 00:15:20.978 ], 00:15:20.978 "driver_specific": {} 00:15:20.978 }, 00:15:20.978 { 00:15:20.978 "name": "Passthru0", 00:15:20.978 "aliases": [ 00:15:20.978 "e90e9d6c-71e6-5854-ad05-7065afbdf760" 00:15:20.978 ], 00:15:20.978 "product_name": "passthru", 00:15:20.978 "block_size": 512, 00:15:20.978 "num_blocks": 16384, 00:15:20.978 "uuid": "e90e9d6c-71e6-5854-ad05-7065afbdf760", 00:15:20.978 "assigned_rate_limits": { 00:15:20.978 "rw_ios_per_sec": 0, 00:15:20.978 "rw_mbytes_per_sec": 0, 00:15:20.978 "r_mbytes_per_sec": 0, 00:15:20.978 "w_mbytes_per_sec": 0 
00:15:20.978 }, 00:15:20.978 "claimed": false, 00:15:20.978 "zoned": false, 00:15:20.978 "supported_io_types": { 00:15:20.978 "read": true, 00:15:20.979 "write": true, 00:15:20.979 "unmap": true, 00:15:20.979 "flush": true, 00:15:20.979 "reset": true, 00:15:20.979 "nvme_admin": false, 00:15:20.979 "nvme_io": false, 00:15:20.979 "nvme_io_md": false, 00:15:20.979 "write_zeroes": true, 00:15:20.979 "zcopy": true, 00:15:20.979 "get_zone_info": false, 00:15:20.979 "zone_management": false, 00:15:20.979 "zone_append": false, 00:15:20.979 "compare": false, 00:15:20.979 "compare_and_write": false, 00:15:20.979 "abort": true, 00:15:20.979 "seek_hole": false, 00:15:20.979 "seek_data": false, 00:15:20.979 "copy": true, 00:15:20.979 "nvme_iov_md": false 00:15:20.979 }, 00:15:20.979 "memory_domains": [ 00:15:20.979 { 00:15:20.979 "dma_device_id": "system", 00:15:20.979 "dma_device_type": 1 00:15:20.979 }, 00:15:20.979 { 00:15:20.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.979 "dma_device_type": 2 00:15:20.979 } 00:15:20.979 ], 00:15:20.979 "driver_specific": { 00:15:20.979 "passthru": { 00:15:20.979 "name": "Passthru0", 00:15:20.979 "base_bdev_name": "Malloc2" 00:15:20.979 } 00:15:20.979 } 00:15:20.979 } 00:15:20.979 ]' 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:20.979 ************************************ 00:15:20.979 END TEST rpc_daemon_integrity 00:15:20.979 ************************************ 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:20.979 00:15:20.979 real 0m0.389s 00:15:20.979 user 0m0.226s 00:15:20.979 sys 0m0.058s 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:20.979 07:36:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:20.979 07:36:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:20.979 07:36:20 rpc -- rpc/rpc.sh@84 -- # killprocess 56515 00:15:20.979 07:36:20 rpc -- common/autotest_common.sh@953 -- # '[' -z 56515 ']' 00:15:20.979 07:36:20 rpc -- common/autotest_common.sh@957 -- # kill -0 56515 00:15:20.979 07:36:20 rpc -- common/autotest_common.sh@958 -- # uname 00:15:20.979 07:36:20 rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:20.979 07:36:20 rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 56515 00:15:21.237 killing process with pid 56515 00:15:21.237 07:36:20 rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:21.237 07:36:20 rpc -- common/autotest_common.sh@963 -- 
# '[' reactor_0 = sudo ']' 00:15:21.237 07:36:20 rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 56515' 00:15:21.237 07:36:20 rpc -- common/autotest_common.sh@972 -- # kill 56515 00:15:21.237 07:36:20 rpc -- common/autotest_common.sh@977 -- # wait 56515 00:15:24.579 00:15:24.579 real 0m6.312s 00:15:24.579 user 0m6.962s 00:15:24.579 sys 0m0.995s 00:15:24.579 07:36:23 rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:24.579 ************************************ 00:15:24.579 END TEST rpc 00:15:24.579 ************************************ 00:15:24.579 07:36:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.579 07:36:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:24.579 07:36:23 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:24.579 07:36:23 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:24.579 07:36:23 -- common/autotest_common.sh@10 -- # set +x 00:15:24.579 ************************************ 00:15:24.579 START TEST skip_rpc 00:15:24.579 ************************************ 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:24.579 * Looking for test storage... 
00:15:24.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1626 -- # lcov --version 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.579 07:36:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.579 --rc genhtml_branch_coverage=1 00:15:24.579 --rc genhtml_function_coverage=1 00:15:24.579 --rc genhtml_legend=1 00:15:24.579 --rc geninfo_all_blocks=1 00:15:24.579 --rc geninfo_unexecuted_blocks=1 00:15:24.579 00:15:24.579 ' 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.579 --rc genhtml_branch_coverage=1 00:15:24.579 --rc genhtml_function_coverage=1 00:15:24.579 --rc genhtml_legend=1 00:15:24.579 --rc geninfo_all_blocks=1 00:15:24.579 --rc geninfo_unexecuted_blocks=1 00:15:24.579 00:15:24.579 ' 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1640 -- # export 
'LCOV=lcov 00:15:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.579 --rc genhtml_branch_coverage=1 00:15:24.579 --rc genhtml_function_coverage=1 00:15:24.579 --rc genhtml_legend=1 00:15:24.579 --rc geninfo_all_blocks=1 00:15:24.579 --rc geninfo_unexecuted_blocks=1 00:15:24.579 00:15:24.579 ' 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.579 --rc genhtml_branch_coverage=1 00:15:24.579 --rc genhtml_function_coverage=1 00:15:24.579 --rc genhtml_legend=1 00:15:24.579 --rc geninfo_all_blocks=1 00:15:24.579 --rc geninfo_unexecuted_blocks=1 00:15:24.579 00:15:24.579 ' 00:15:24.579 07:36:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:24.579 07:36:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:24.579 07:36:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:24.579 07:36:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.579 ************************************ 00:15:24.579 START TEST skip_rpc 00:15:24.579 ************************************ 00:15:24.579 07:36:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # test_skip_rpc 00:15:24.579 07:36:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56766 00:15:24.579 07:36:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:24.579 07:36:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:24.579 07:36:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:24.579 [2024-10-07 07:36:24.042937] Starting SPDK v25.01-pre 
git sha1 70750b651 / DPDK 24.03.0 initialization... 00:15:24.580 [2024-10-07 07:36:24.043416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56766 ] 00:15:24.837 [2024-10-07 07:36:24.231012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.095 [2024-10-07 07:36:24.492886] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # local es=0 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@656 -- # rpc_cmd spdk_get_version 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@656 -- # es=1 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@680 -- # (( 
!es == 0 )) 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56766 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' -z 56766 ']' 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # kill -0 56766 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # uname 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 56766 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 56766' 00:15:30.361 killing process with pid 56766 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # kill 56766 00:15:30.361 07:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@977 -- # wait 56766 00:15:32.890 ************************************ 00:15:32.890 END TEST skip_rpc 00:15:32.890 ************************************ 00:15:32.890 00:15:32.891 real 0m8.115s 00:15:32.891 user 0m7.530s 00:15:32.891 sys 0m0.473s 00:15:32.891 07:36:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:32.891 07:36:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 07:36:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:32.891 07:36:32 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:32.891 07:36:32 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:32.891 07:36:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 
************************************ 00:15:32.891 START TEST skip_rpc_with_json 00:15:32.891 ************************************ 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # test_skip_rpc_with_json 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56876 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56876 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # '[' -z 56876 ']' 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local max_retries=100 00:15:32.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@843 -- # xtrace_disable 00:15:32.891 07:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 [2024-10-07 07:36:32.209682] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:15:32.891 [2024-10-07 07:36:32.210101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56876 ] 00:15:32.891 [2024-10-07 07:36:32.396898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.149 [2024-10-07 07:36:32.664677] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@867 -- # return 0 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:34.524 [2024-10-07 07:36:33.698936] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:34.524 request: 00:15:34.524 { 00:15:34.524 "trtype": "tcp", 00:15:34.524 "method": "nvmf_get_transports", 00:15:34.524 "req_id": 1 00:15:34.524 } 00:15:34.524 Got JSON-RPC error response 00:15:34.524 response: 00:15:34.524 { 00:15:34.524 "code": -19, 00:15:34.524 "message": "No such device" 00:15:34.524 } 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:34.524 [2024-10-07 07:36:33.707055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@564 -- # xtrace_disable 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:15:34.524 07:36:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:34.524 { 00:15:34.524 "subsystems": [ 00:15:34.524 { 00:15:34.524 "subsystem": "fsdev", 00:15:34.524 "config": [ 00:15:34.524 { 00:15:34.524 "method": "fsdev_set_opts", 00:15:34.524 "params": { 00:15:34.524 "fsdev_io_pool_size": 65535, 00:15:34.524 "fsdev_io_cache_size": 256 00:15:34.524 } 00:15:34.524 } 00:15:34.524 ] 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "subsystem": "keyring", 00:15:34.524 "config": [] 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "subsystem": "iobuf", 00:15:34.524 "config": [ 00:15:34.524 { 00:15:34.524 "method": "iobuf_set_options", 00:15:34.524 "params": { 00:15:34.524 "small_pool_count": 8192, 00:15:34.524 "large_pool_count": 1024, 00:15:34.524 "small_bufsize": 8192, 00:15:34.524 "large_bufsize": 135168 00:15:34.524 } 00:15:34.524 } 00:15:34.524 ] 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "subsystem": "sock", 00:15:34.524 "config": [ 00:15:34.524 { 00:15:34.524 "method": "sock_set_default_impl", 00:15:34.524 "params": { 00:15:34.524 "impl_name": "posix" 00:15:34.524 } 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "method": "sock_impl_set_options", 00:15:34.524 "params": { 00:15:34.524 "impl_name": "ssl", 00:15:34.524 "recv_buf_size": 4096, 00:15:34.524 "send_buf_size": 4096, 00:15:34.524 "enable_recv_pipe": true, 00:15:34.524 "enable_quickack": false, 00:15:34.524 "enable_placement_id": 0, 00:15:34.524 
"enable_zerocopy_send_server": true, 00:15:34.524 "enable_zerocopy_send_client": false, 00:15:34.524 "zerocopy_threshold": 0, 00:15:34.524 "tls_version": 0, 00:15:34.524 "enable_ktls": false 00:15:34.524 } 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "method": "sock_impl_set_options", 00:15:34.524 "params": { 00:15:34.524 "impl_name": "posix", 00:15:34.524 "recv_buf_size": 2097152, 00:15:34.524 "send_buf_size": 2097152, 00:15:34.524 "enable_recv_pipe": true, 00:15:34.524 "enable_quickack": false, 00:15:34.524 "enable_placement_id": 0, 00:15:34.524 "enable_zerocopy_send_server": true, 00:15:34.524 "enable_zerocopy_send_client": false, 00:15:34.524 "zerocopy_threshold": 0, 00:15:34.524 "tls_version": 0, 00:15:34.524 "enable_ktls": false 00:15:34.524 } 00:15:34.524 } 00:15:34.524 ] 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "subsystem": "vmd", 00:15:34.524 "config": [] 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "subsystem": "accel", 00:15:34.524 "config": [ 00:15:34.524 { 00:15:34.524 "method": "accel_set_options", 00:15:34.524 "params": { 00:15:34.524 "small_cache_size": 128, 00:15:34.524 "large_cache_size": 16, 00:15:34.524 "task_count": 2048, 00:15:34.524 "sequence_count": 2048, 00:15:34.524 "buf_count": 2048 00:15:34.524 } 00:15:34.524 } 00:15:34.524 ] 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "subsystem": "bdev", 00:15:34.524 "config": [ 00:15:34.524 { 00:15:34.524 "method": "bdev_set_options", 00:15:34.524 "params": { 00:15:34.524 "bdev_io_pool_size": 65535, 00:15:34.524 "bdev_io_cache_size": 256, 00:15:34.524 "bdev_auto_examine": true, 00:15:34.524 "iobuf_small_cache_size": 128, 00:15:34.524 "iobuf_large_cache_size": 16 00:15:34.524 } 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "method": "bdev_raid_set_options", 00:15:34.524 "params": { 00:15:34.524 "process_window_size_kb": 1024, 00:15:34.524 "process_max_bandwidth_mb_sec": 0 00:15:34.524 } 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "method": "bdev_iscsi_set_options", 00:15:34.524 "params": { 00:15:34.524 
"timeout_sec": 30 00:15:34.524 } 00:15:34.524 }, 00:15:34.524 { 00:15:34.524 "method": "bdev_nvme_set_options", 00:15:34.524 "params": { 00:15:34.524 "action_on_timeout": "none", 00:15:34.524 "timeout_us": 0, 00:15:34.524 "timeout_admin_us": 0, 00:15:34.524 "keep_alive_timeout_ms": 10000, 00:15:34.524 "arbitration_burst": 0, 00:15:34.524 "low_priority_weight": 0, 00:15:34.524 "medium_priority_weight": 0, 00:15:34.524 "high_priority_weight": 0, 00:15:34.524 "nvme_adminq_poll_period_us": 10000, 00:15:34.524 "nvme_ioq_poll_period_us": 0, 00:15:34.524 "io_queue_requests": 0, 00:15:34.524 "delay_cmd_submit": true, 00:15:34.524 "transport_retry_count": 4, 00:15:34.525 "bdev_retry_count": 3, 00:15:34.525 "transport_ack_timeout": 0, 00:15:34.525 "ctrlr_loss_timeout_sec": 0, 00:15:34.525 "reconnect_delay_sec": 0, 00:15:34.525 "fast_io_fail_timeout_sec": 0, 00:15:34.525 "disable_auto_failback": false, 00:15:34.525 "generate_uuids": false, 00:15:34.525 "transport_tos": 0, 00:15:34.525 "nvme_error_stat": false, 00:15:34.525 "rdma_srq_size": 0, 00:15:34.525 "io_path_stat": false, 00:15:34.525 "allow_accel_sequence": false, 00:15:34.525 "rdma_max_cq_size": 0, 00:15:34.525 "rdma_cm_event_timeout_ms": 0, 00:15:34.525 "dhchap_digests": [ 00:15:34.525 "sha256", 00:15:34.525 "sha384", 00:15:34.525 "sha512" 00:15:34.525 ], 00:15:34.525 "dhchap_dhgroups": [ 00:15:34.525 "null", 00:15:34.525 "ffdhe2048", 00:15:34.525 "ffdhe3072", 00:15:34.525 "ffdhe4096", 00:15:34.525 "ffdhe6144", 00:15:34.525 "ffdhe8192" 00:15:34.525 ] 00:15:34.525 } 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "method": "bdev_nvme_set_hotplug", 00:15:34.525 "params": { 00:15:34.525 "period_us": 100000, 00:15:34.525 "enable": false 00:15:34.525 } 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "method": "bdev_wait_for_examine" 00:15:34.525 } 00:15:34.525 ] 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "scsi", 00:15:34.525 "config": null 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "scheduler", 
00:15:34.525 "config": [ 00:15:34.525 { 00:15:34.525 "method": "framework_set_scheduler", 00:15:34.525 "params": { 00:15:34.525 "name": "static" 00:15:34.525 } 00:15:34.525 } 00:15:34.525 ] 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "vhost_scsi", 00:15:34.525 "config": [] 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "vhost_blk", 00:15:34.525 "config": [] 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "ublk", 00:15:34.525 "config": [] 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "nbd", 00:15:34.525 "config": [] 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "nvmf", 00:15:34.525 "config": [ 00:15:34.525 { 00:15:34.525 "method": "nvmf_set_config", 00:15:34.525 "params": { 00:15:34.525 "discovery_filter": "match_any", 00:15:34.525 "admin_cmd_passthru": { 00:15:34.525 "identify_ctrlr": false 00:15:34.525 }, 00:15:34.525 "dhchap_digests": [ 00:15:34.525 "sha256", 00:15:34.525 "sha384", 00:15:34.525 "sha512" 00:15:34.525 ], 00:15:34.525 "dhchap_dhgroups": [ 00:15:34.525 "null", 00:15:34.525 "ffdhe2048", 00:15:34.525 "ffdhe3072", 00:15:34.525 "ffdhe4096", 00:15:34.525 "ffdhe6144", 00:15:34.525 "ffdhe8192" 00:15:34.525 ] 00:15:34.525 } 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "method": "nvmf_set_max_subsystems", 00:15:34.525 "params": { 00:15:34.525 "max_subsystems": 1024 00:15:34.525 } 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "method": "nvmf_set_crdt", 00:15:34.525 "params": { 00:15:34.525 "crdt1": 0, 00:15:34.525 "crdt2": 0, 00:15:34.525 "crdt3": 0 00:15:34.525 } 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "method": "nvmf_create_transport", 00:15:34.525 "params": { 00:15:34.525 "trtype": "TCP", 00:15:34.525 "max_queue_depth": 128, 00:15:34.525 "max_io_qpairs_per_ctrlr": 127, 00:15:34.525 "in_capsule_data_size": 4096, 00:15:34.525 "max_io_size": 131072, 00:15:34.525 "io_unit_size": 131072, 00:15:34.525 "max_aq_depth": 128, 00:15:34.525 "num_shared_buffers": 511, 00:15:34.525 "buf_cache_size": 4294967295, 
00:15:34.525 "dif_insert_or_strip": false, 00:15:34.525 "zcopy": false, 00:15:34.525 "c2h_success": true, 00:15:34.525 "sock_priority": 0, 00:15:34.525 "abort_timeout_sec": 1, 00:15:34.525 "ack_timeout": 0, 00:15:34.525 "data_wr_pool_size": 0 00:15:34.525 } 00:15:34.525 } 00:15:34.525 ] 00:15:34.525 }, 00:15:34.525 { 00:15:34.525 "subsystem": "iscsi", 00:15:34.525 "config": [ 00:15:34.525 { 00:15:34.525 "method": "iscsi_set_options", 00:15:34.525 "params": { 00:15:34.525 "node_base": "iqn.2016-06.io.spdk", 00:15:34.525 "max_sessions": 128, 00:15:34.525 "max_connections_per_session": 2, 00:15:34.525 "max_queue_depth": 64, 00:15:34.525 "default_time2wait": 2, 00:15:34.525 "default_time2retain": 20, 00:15:34.525 "first_burst_length": 8192, 00:15:34.525 "immediate_data": true, 00:15:34.525 "allow_duplicated_isid": false, 00:15:34.525 "error_recovery_level": 0, 00:15:34.525 "nop_timeout": 60, 00:15:34.525 "nop_in_interval": 30, 00:15:34.525 "disable_chap": false, 00:15:34.525 "require_chap": false, 00:15:34.525 "mutual_chap": false, 00:15:34.525 "chap_group": 0, 00:15:34.525 "max_large_datain_per_connection": 64, 00:15:34.525 "max_r2t_per_connection": 4, 00:15:34.525 "pdu_pool_size": 36864, 00:15:34.525 "immediate_data_pool_size": 16384, 00:15:34.525 "data_out_pool_size": 2048 00:15:34.525 } 00:15:34.525 } 00:15:34.525 ] 00:15:34.525 } 00:15:34.525 ] 00:15:34.525 } 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56876 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' -z 56876 ']' 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # kill -0 56876 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # uname 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 
00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 56876 00:15:34.525 killing process with pid 56876 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # echo 'killing process with pid 56876' 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # kill 56876 00:15:34.525 07:36:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@977 -- # wait 56876 00:15:37.813 07:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56943 00:15:37.813 07:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:37.813 07:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56943 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' -z 56943 ']' 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # kill -0 56943 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # uname 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 56943 00:15:43.081 killing process with pid 56943 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 
00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # echo 'killing process with pid 56943' 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # kill 56943 00:15:43.081 07:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@977 -- # wait 56943 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:45.609 00:15:45.609 real 0m12.762s 00:15:45.609 user 0m12.326s 00:15:45.609 sys 0m1.027s 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:45.609 ************************************ 00:15:45.609 END TEST skip_rpc_with_json 00:15:45.609 ************************************ 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:45.609 07:36:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:45.609 07:36:44 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:45.609 07:36:44 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:45.609 07:36:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.609 ************************************ 00:15:45.609 START TEST skip_rpc_with_delay 00:15:45.609 ************************************ 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # test_skip_rpc_with_delay 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # local es=0 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@655 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:45.609 07:36:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@656 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:45.609 [2024-10-07 07:36:44.992810] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:15:45.609 [2024-10-07 07:36:44.992986] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:45.609 ************************************ 00:15:45.609 END TEST skip_rpc_with_delay 00:15:45.609 ************************************ 00:15:45.609 07:36:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@656 -- # es=1 00:15:45.609 07:36:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:15:45.609 07:36:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:15:45.609 07:36:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:15:45.609 00:15:45.609 real 0m0.169s 00:15:45.609 user 0m0.079s 00:15:45.609 sys 0m0.087s 00:15:45.609 07:36:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:45.609 07:36:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:15:45.609 07:36:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:15:45.609 07:36:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:45.609 07:36:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:45.609 07:36:45 skip_rpc -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:45.609 07:36:45 skip_rpc -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:45.609 07:36:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.609 ************************************ 00:15:45.609 START TEST exit_on_failed_rpc_init 00:15:45.609 ************************************ 00:15:45.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # test_exit_on_failed_rpc_init 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57082 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57082 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # '[' -z 57082 ']' 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local max_retries=100 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@843 -- # xtrace_disable 00:15:45.609 07:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:45.868 [2024-10-07 07:36:45.267181] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:15:45.868 [2024-10-07 07:36:45.267414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57082 ] 00:15:46.126 [2024-10-07 07:36:45.457937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.385 [2024-10-07 07:36:45.692127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.320 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:15:47.320 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@867 -- # return 0 00:15:47.320 07:36:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:47.320 07:36:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:47.320 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # local es=0 00:15:47.320 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.321 07:36:46 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:47.321 07:36:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@656 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:47.321 [2024-10-07 07:36:46.752495] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:15:47.321 [2024-10-07 07:36:46.752972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57100 ] 00:15:47.579 [2024-10-07 07:36:46.940001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.837 [2024-10-07 07:36:47.239570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.837 [2024-10-07 07:36:47.239689] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:47.837 [2024-10-07 07:36:47.239725] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:47.837 [2024-10-07 07:36:47.239761] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@656 -- # es=234 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # es=106 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # case "$es" in 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@673 -- # es=1 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57082 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' -z 57082 ']' 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # kill -0 57082 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # uname 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 57082 00:15:48.403 killing process with pid 57082 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # 
echo 'killing process with pid 57082' 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # kill 57082 00:15:48.403 07:36:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@977 -- # wait 57082 00:15:50.936 00:15:50.936 real 0m5.376s 00:15:50.936 user 0m6.125s 00:15:50.936 sys 0m0.736s 00:15:50.936 07:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:50.936 ************************************ 00:15:50.936 END TEST exit_on_failed_rpc_init 00:15:50.936 ************************************ 00:15:50.936 07:36:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:51.194 07:36:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:51.194 00:15:51.194 real 0m26.915s 00:15:51.194 user 0m26.284s 00:15:51.194 sys 0m2.596s 00:15:51.194 07:36:50 skip_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:51.194 07:36:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.194 ************************************ 00:15:51.194 END TEST skip_rpc 00:15:51.194 ************************************ 00:15:51.194 07:36:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:51.194 07:36:50 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:51.194 07:36:50 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:51.194 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:15:51.194 ************************************ 00:15:51.194 START TEST rpc_client 00:15:51.194 ************************************ 00:15:51.194 07:36:50 rpc_client -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:51.194 * Looking for test storage... 
00:15:51.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:51.194 07:36:50 rpc_client -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:51.194 07:36:50 rpc_client -- common/autotest_common.sh@1626 -- # lcov --version 00:15:51.194 07:36:50 rpc_client -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:51.453 07:36:50 rpc_client -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.453 07:36:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.454 07:36:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:15:51.454 07:36:50 rpc_client -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.454 07:36:50 rpc_client -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.454 --rc genhtml_branch_coverage=1 00:15:51.454 --rc genhtml_function_coverage=1 00:15:51.454 --rc genhtml_legend=1 00:15:51.454 --rc geninfo_all_blocks=1 00:15:51.454 --rc geninfo_unexecuted_blocks=1 00:15:51.454 00:15:51.454 ' 00:15:51.454 07:36:50 rpc_client -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.454 --rc genhtml_branch_coverage=1 00:15:51.454 --rc genhtml_function_coverage=1 00:15:51.454 --rc genhtml_legend=1 00:15:51.454 --rc geninfo_all_blocks=1 00:15:51.454 --rc geninfo_unexecuted_blocks=1 00:15:51.454 00:15:51.454 ' 00:15:51.454 07:36:50 rpc_client -- 
common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.454 --rc genhtml_branch_coverage=1 00:15:51.454 --rc genhtml_function_coverage=1 00:15:51.454 --rc genhtml_legend=1 00:15:51.454 --rc geninfo_all_blocks=1 00:15:51.454 --rc geninfo_unexecuted_blocks=1 00:15:51.454 00:15:51.454 ' 00:15:51.454 07:36:50 rpc_client -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:51.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.454 --rc genhtml_branch_coverage=1 00:15:51.454 --rc genhtml_function_coverage=1 00:15:51.454 --rc genhtml_legend=1 00:15:51.454 --rc geninfo_all_blocks=1 00:15:51.454 --rc geninfo_unexecuted_blocks=1 00:15:51.454 00:15:51.454 ' 00:15:51.454 07:36:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:51.454 OK 00:15:51.454 07:36:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:51.454 ************************************ 00:15:51.454 END TEST rpc_client 00:15:51.454 ************************************ 00:15:51.454 00:15:51.454 real 0m0.278s 00:15:51.454 user 0m0.147s 00:15:51.454 sys 0m0.142s 00:15:51.454 07:36:50 rpc_client -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:51.454 07:36:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:15:51.454 07:36:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:51.454 07:36:50 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:51.454 07:36:50 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:51.454 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:15:51.454 ************************************ 00:15:51.454 START TEST json_config 00:15:51.454 ************************************ 00:15:51.454 07:36:50 json_config -- common/autotest_common.sh@1128 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1626 -- # lcov --version 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.714 07:36:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.714 07:36:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.714 07:36:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.714 07:36:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.714 07:36:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.714 07:36:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.714 07:36:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.714 07:36:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:15:51.714 07:36:51 json_config -- scripts/common.sh@345 -- # : 1 00:15:51.714 07:36:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.714 07:36:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.714 07:36:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:15:51.714 07:36:51 json_config -- scripts/common.sh@353 -- # local d=1 00:15:51.714 07:36:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.714 07:36:51 json_config -- scripts/common.sh@355 -- # echo 1 00:15:51.714 07:36:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.714 07:36:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@353 -- # local d=2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.714 07:36:51 json_config -- scripts/common.sh@355 -- # echo 2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.714 07:36:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.714 07:36:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.714 07:36:51 json_config -- scripts/common.sh@368 -- # return 0 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.714 --rc genhtml_branch_coverage=1 00:15:51.714 --rc genhtml_function_coverage=1 00:15:51.714 --rc genhtml_legend=1 00:15:51.714 --rc geninfo_all_blocks=1 00:15:51.714 --rc geninfo_unexecuted_blocks=1 00:15:51.714 00:15:51.714 ' 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.714 --rc genhtml_branch_coverage=1 00:15:51.714 --rc genhtml_function_coverage=1 00:15:51.714 --rc genhtml_legend=1 00:15:51.714 --rc geninfo_all_blocks=1 00:15:51.714 --rc geninfo_unexecuted_blocks=1 00:15:51.714 00:15:51.714 ' 00:15:51.714 07:36:51 json_config -- 
common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.714 --rc genhtml_branch_coverage=1 00:15:51.714 --rc genhtml_function_coverage=1 00:15:51.714 --rc genhtml_legend=1 00:15:51.714 --rc geninfo_all_blocks=1 00:15:51.714 --rc geninfo_unexecuted_blocks=1 00:15:51.714 00:15:51.714 ' 00:15:51.714 07:36:51 json_config -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.714 --rc genhtml_branch_coverage=1 00:15:51.714 --rc genhtml_function_coverage=1 00:15:51.714 --rc genhtml_legend=1 00:15:51.714 --rc geninfo_all_blocks=1 00:15:51.714 --rc geninfo_unexecuted_blocks=1 00:15:51.714 00:15:51.714 ' 00:15:51.714 07:36:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1b4d21d1-c360-43bc-be59-da89d43eb54f 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1b4d21d1-c360-43bc-be59-da89d43eb54f 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.714 07:36:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.714 07:36:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.714 07:36:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.714 07:36:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.714 07:36:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.714 07:36:51 json_config -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.714 07:36:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.714 07:36:51 json_config -- paths/export.sh@5 -- # export PATH 00:15:51.714 07:36:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@51 -- # : 0 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@53 
-- # build_nvmf_app_args 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.714 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.714 07:36:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.715 07:36:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.715 07:36:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:51.715 07:36:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:51.715 07:36:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:51.715 07:36:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:51.715 07:36:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:51.715 WARNING: No tests are enabled so not running JSON configuration tests 00:15:51.715 07:36:51 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:15:51.715 07:36:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:15:51.715 ************************************ 00:15:51.715 END TEST json_config 00:15:51.715 ************************************ 00:15:51.715 00:15:51.715 real 0m0.239s 00:15:51.715 user 0m0.134s 00:15:51.715 sys 0m0.110s 00:15:51.715 07:36:51 json_config -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:51.715 07:36:51 json_config -- common/autotest_common.sh@10 -- # set +x 
00:15:51.715 07:36:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:51.715 07:36:51 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:51.715 07:36:51 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:51.715 07:36:51 -- common/autotest_common.sh@10 -- # set +x 00:15:51.715 ************************************ 00:15:51.715 START TEST json_config_extra_key 00:15:51.715 ************************************ 00:15:51.715 07:36:51 json_config_extra_key -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1626 -- # lcov --version 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.975 07:36:51 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:51.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.975 --rc genhtml_branch_coverage=1 00:15:51.975 --rc genhtml_function_coverage=1 00:15:51.975 --rc 
genhtml_legend=1 00:15:51.975 --rc geninfo_all_blocks=1 00:15:51.975 --rc geninfo_unexecuted_blocks=1 00:15:51.975 00:15:51.975 ' 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:51.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.975 --rc genhtml_branch_coverage=1 00:15:51.975 --rc genhtml_function_coverage=1 00:15:51.975 --rc genhtml_legend=1 00:15:51.975 --rc geninfo_all_blocks=1 00:15:51.975 --rc geninfo_unexecuted_blocks=1 00:15:51.975 00:15:51.975 ' 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:51.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.975 --rc genhtml_branch_coverage=1 00:15:51.975 --rc genhtml_function_coverage=1 00:15:51.975 --rc genhtml_legend=1 00:15:51.975 --rc geninfo_all_blocks=1 00:15:51.975 --rc geninfo_unexecuted_blocks=1 00:15:51.975 00:15:51.975 ' 00:15:51.975 07:36:51 json_config_extra_key -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:51.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.975 --rc genhtml_branch_coverage=1 00:15:51.975 --rc genhtml_function_coverage=1 00:15:51.975 --rc genhtml_legend=1 00:15:51.975 --rc geninfo_all_blocks=1 00:15:51.975 --rc geninfo_unexecuted_blocks=1 00:15:51.975 00:15:51.975 ' 00:15:51.975 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@12 -- 
# NVMF_IP_PREFIX=192.168.100 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1b4d21d1-c360-43bc-be59-da89d43eb54f 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1b4d21d1-c360-43bc-be59-da89d43eb54f 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.975 07:36:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.975 07:36:51 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.975 07:36:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.975 07:36:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.975 07:36:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:15:51.975 07:36:51 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.975 07:36:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:51.976 07:36:51 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:51.976 INFO: launching applications... 00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
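The `app_pid`/`app_socket`/`app_params` lines traced above build bash associative arrays (`declare -A`): per-application lookup tables keyed by role name, with only a `target` entry in this test. A self-contained sketch of the same pattern, using the values visible in the trace:

```shell
# One table per attribute, all keyed by the application role.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

app=target
echo "socket for $app: ${app_socket[$app]}"   # -> socket for target: /var/tmp/spdk_tgt.sock
```

Keeping parallel tables keyed by the same role lets the harness address "the target app's pid/socket/params" uniformly, and adding a second role later only means adding entries, not new variables.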
00:15:51.976 07:36:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57339 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:51.976 Waiting for target to run... 00:15:51.976 07:36:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57339 /var/tmp/spdk_tgt.sock 00:15:51.976 07:36:51 json_config_extra_key -- common/autotest_common.sh@834 -- # '[' -z 57339 ']' 00:15:51.976 07:36:51 json_config_extra_key -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:51.976 07:36:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local max_retries=100 00:15:51.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
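`waitforlisten` above blocks until the freshly launched `spdk_tgt` answers on its UNIX socket, retrying up to `max_retries` times. A generic, hedged stand-in for that idea (this is an illustrative bounded-poll helper, not SPDK's implementation):

```shell
# Retry an arbitrary probe command until it succeeds or attempts run out.
wait_for() {
  local max_retries=$1; shift
  local i
  for (( i = 0; i < max_retries; i++ )); do
    "$@" && return 0   # probe succeeded: the resource is ready
    sleep 0.1
  done
  return 1             # gave up after max_retries attempts
}

# e.g. wait up to ~5s for the RPC socket to appear:
# wait_for 50 test -S /var/tmp/spdk_tgt.sock
```

Taking the probe as `"$@"` keeps the helper reusable: the same loop can wait for a socket, a file, or a successful RPC ping.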
00:15:51.976 07:36:51 json_config_extra_key -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:51.976 07:36:51 json_config_extra_key -- common/autotest_common.sh@843 -- # xtrace_disable 00:15:51.976 07:36:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:52.234 [2024-10-07 07:36:51.591274] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:15:52.234 [2024-10-07 07:36:51.591456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57339 ] 00:15:52.493 [2024-10-07 07:36:52.008392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.751 [2024-10-07 07:36:52.207758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.690 07:36:52 json_config_extra_key -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:15:53.690 07:36:52 json_config_extra_key -- common/autotest_common.sh@867 -- # return 0 00:15:53.690 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:15:53.690 INFO: shutting down applications... 00:15:53.690 07:36:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
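The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` records traced earlier gate the lcov coverage options on the lcov version: the version strings are split on `IFS=.-:` into arrays and compared component by component. A hedged standalone sketch of that comparison (illustrative; not SPDK's `scripts/common.sh`):

```shell
# Returns 0 when $1 < $2, splitting on '.', '-' and ':' like the trace does.
version_lt() {
  local IFS=.-:
  local -a v1=($1) v2=($2)                  # unquoted on purpose: IFS splits here
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
  done
  return 1                                   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

Missing components default to 0 (`${v1[i]:-0}`), so `1.15` versus `2` compares as `1.15.0` versus `2.0.0`, which is why the trace takes the old-lcov branch and sets the pre-2.x `--rc lcov_*` option spellings.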
00:15:53.690 07:36:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57339 ]] 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57339 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:53.690 07:36:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:53.947 07:36:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:53.947 07:36:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:53.947 07:36:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:53.947 07:36:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:54.514 07:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:54.514 07:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:54.514 07:36:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:54.514 07:36:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:55.080 07:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:55.080 07:36:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:55.080 07:36:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:55.080 07:36:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:55.646 07:36:55 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:15:55.646 07:36:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:55.646 07:36:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:55.646 07:36:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:56.213 07:36:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:56.213 07:36:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:56.213 07:36:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:56.213 07:36:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:56.779 07:36:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:56.779 07:36:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:56.779 07:36:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:56.779 07:36:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:57.037 07:36:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:57.037 07:36:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:57.037 07:36:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57339 00:15:57.037 07:36:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:57.037 07:36:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:15:57.037 07:36:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:57.037 SPDK target shutdown done 00:15:57.037 07:36:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:57.037 Success 00:15:57.037 07:36:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:57.037 ************************************ 00:15:57.037 END TEST json_config_extra_key 00:15:57.037 ************************************ 00:15:57.037 00:15:57.037 real 0m5.301s 00:15:57.037 user 
0m4.806s 00:15:57.037 sys 0m0.635s 00:15:57.037 07:36:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # xtrace_disable 00:15:57.037 07:36:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:57.037 07:36:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:57.037 07:36:56 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:15:57.037 07:36:56 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:15:57.037 07:36:56 -- common/autotest_common.sh@10 -- # set +x 00:15:57.037 ************************************ 00:15:57.037 START TEST alias_rpc 00:15:57.037 ************************************ 00:15:57.037 07:36:56 alias_rpc -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:57.296 * Looking for test storage... 00:15:57.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:57.296 07:36:56 alias_rpc -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:15:57.296 07:36:56 alias_rpc -- common/autotest_common.sh@1626 -- # lcov --version 00:15:57.296 07:36:56 alias_rpc -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:15:57.296 07:36:56 alias_rpc -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@340 
-- # ver1_l=2 00:15:57.296 07:36:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.297 07:36:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:15:57.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.297 --rc genhtml_branch_coverage=1 00:15:57.297 --rc genhtml_function_coverage=1 00:15:57.297 --rc genhtml_legend=1 00:15:57.297 --rc geninfo_all_blocks=1 00:15:57.297 --rc geninfo_unexecuted_blocks=1 00:15:57.297 
00:15:57.297 ' 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:15:57.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.297 --rc genhtml_branch_coverage=1 00:15:57.297 --rc genhtml_function_coverage=1 00:15:57.297 --rc genhtml_legend=1 00:15:57.297 --rc geninfo_all_blocks=1 00:15:57.297 --rc geninfo_unexecuted_blocks=1 00:15:57.297 00:15:57.297 ' 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:15:57.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.297 --rc genhtml_branch_coverage=1 00:15:57.297 --rc genhtml_function_coverage=1 00:15:57.297 --rc genhtml_legend=1 00:15:57.297 --rc geninfo_all_blocks=1 00:15:57.297 --rc geninfo_unexecuted_blocks=1 00:15:57.297 00:15:57.297 ' 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:15:57.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.297 --rc genhtml_branch_coverage=1 00:15:57.297 --rc genhtml_function_coverage=1 00:15:57.297 --rc genhtml_legend=1 00:15:57.297 --rc geninfo_all_blocks=1 00:15:57.297 --rc geninfo_unexecuted_blocks=1 00:15:57.297 00:15:57.297 ' 00:15:57.297 07:36:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:57.297 07:36:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57463 00:15:57.297 07:36:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.297 07:36:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57463 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@834 -- # '[' -z 57463 ']' 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:15:57.297 07:36:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.556 [2024-10-07 07:36:56.957305] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:15:57.556 [2024-10-07 07:36:56.957516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57463 ] 00:15:57.815 [2024-10-07 07:36:57.147037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.074 [2024-10-07 07:36:57.479731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.070 07:36:58 alias_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:15:59.070 07:36:58 alias_rpc -- common/autotest_common.sh@867 -- # return 0 00:15:59.070 07:36:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:59.328 07:36:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57463 00:15:59.328 07:36:58 alias_rpc -- common/autotest_common.sh@953 -- # '[' -z 57463 ']' 00:15:59.328 07:36:58 alias_rpc -- common/autotest_common.sh@957 -- # kill -0 57463 00:15:59.328 07:36:58 alias_rpc -- common/autotest_common.sh@958 -- # uname 00:15:59.328 07:36:58 alias_rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:15:59.328 07:36:58 alias_rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 57463 00:15:59.587 07:36:58 alias_rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:15:59.587 07:36:58 alias_rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:15:59.587 killing process with pid 57463 00:15:59.587 
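The `killprocess 57463` sequence above first checks the PID is alive (`kill -0`), then probes its command name with `ps --no-headers -o comm=` before signalling, so a recycled PID belonging to some other process is not killed blindly. A minimal sketch of that shape (illustrative; SPDK's version adds extra checks such as refusing to kill `sudo`):

```shell
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1       # nothing to kill
  local name
  name=$(ps --no-headers -o comm= -p "$pid")   # same probe as in the trace
  echo "killing process with pid $pid ($name)"
  kill "$pid" && wait "$pid" 2>/dev/null       # signal, then reap our child
  return 0
}
```

`wait` only reaps children of the current shell; for a daemonized target the harness instead polls `kill -0` until the PID disappears, as the shutdown loops earlier in this log show.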
07:36:58 alias_rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 57463' 00:15:59.587 07:36:58 alias_rpc -- common/autotest_common.sh@972 -- # kill 57463 00:15:59.587 07:36:58 alias_rpc -- common/autotest_common.sh@977 -- # wait 57463 00:16:02.895 00:16:02.895 real 0m5.352s 00:16:02.895 user 0m5.526s 00:16:02.895 sys 0m0.690s 00:16:02.895 07:37:01 alias_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:02.895 07:37:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.895 ************************************ 00:16:02.895 END TEST alias_rpc 00:16:02.895 ************************************ 00:16:02.895 07:37:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:16:02.895 07:37:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:02.895 07:37:01 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:02.895 07:37:01 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:02.895 07:37:01 -- common/autotest_common.sh@10 -- # set +x 00:16:02.895 ************************************ 00:16:02.895 START TEST spdkcli_tcp 00:16:02.895 ************************************ 00:16:02.895 07:37:01 spdkcli_tcp -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:02.895 * Looking for test storage... 
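The runs of `kill -0 57339` / `sleep 0.5` earlier in this log are a bounded shutdown wait: signal the target once, then poll its PID until it exits or the retry budget (30 iterations of 0.5s, about 15s) is exhausted. A self-contained sketch of that loop (names and the signal default are illustrative):

```shell
shutdown_app() {
  local pid=$1 sig=${2:-SIGINT} i
  kill -s "$sig" "$pid" 2>/dev/null
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0   # target exited: clean shutdown
    sleep 0.5
  done
  return 1                                   # still alive after ~15s
}
```

Separating "request shutdown" from "observe exit" is what produces the repeated `kill -0` probes in the trace: `kill -0` sends no signal at all, it only tests whether the PID still exists.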
00:16:02.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:02.895 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:02.895 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1626 -- # lcov --version 00:16:02.895 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:02.895 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.895 07:37:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:16:02.895 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.895 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:02.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.895 --rc genhtml_branch_coverage=1 00:16:02.895 --rc genhtml_function_coverage=1 00:16:02.895 --rc genhtml_legend=1 00:16:02.895 --rc geninfo_all_blocks=1 00:16:02.895 --rc geninfo_unexecuted_blocks=1 00:16:02.895 00:16:02.895 ' 00:16:02.895 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:02.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.895 --rc genhtml_branch_coverage=1 00:16:02.895 --rc genhtml_function_coverage=1 00:16:02.895 --rc genhtml_legend=1 00:16:02.895 --rc geninfo_all_blocks=1 00:16:02.895 --rc geninfo_unexecuted_blocks=1 00:16:02.895 00:16:02.895 ' 00:16:02.895 07:37:02 spdkcli_tcp -- 
common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:02.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.895 --rc genhtml_branch_coverage=1 00:16:02.895 --rc genhtml_function_coverage=1 00:16:02.895 --rc genhtml_legend=1 00:16:02.895 --rc geninfo_all_blocks=1 00:16:02.895 --rc geninfo_unexecuted_blocks=1 00:16:02.896 00:16:02.896 ' 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.896 --rc genhtml_branch_coverage=1 00:16:02.896 --rc genhtml_function_coverage=1 00:16:02.896 --rc genhtml_legend=1 00:16:02.896 --rc geninfo_all_blocks=1 00:16:02.896 --rc geninfo_unexecuted_blocks=1 00:16:02.896 00:16:02.896 ' 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57593 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57593 00:16:02.896 07:37:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:02.896 07:37:02 spdkcli_tcp -- 
common/autotest_common.sh@834 -- # '[' -z 57593 ']' 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:02.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:02.896 07:37:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.896 [2024-10-07 07:37:02.364079] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:16:02.896 [2024-10-07 07:37:02.364665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57593 ] 00:16:03.154 [2024-10-07 07:37:02.540425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:03.412 [2024-10-07 07:37:02.860463] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.412 [2024-10-07 07:37:02.860474] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.347 07:37:03 spdkcli_tcp -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:04.347 07:37:03 spdkcli_tcp -- common/autotest_common.sh@867 -- # return 0 00:16:04.347 07:37:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57615 00:16:04.347 07:37:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:04.347 07:37:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:04.914 [ 00:16:04.914 "bdev_malloc_delete", 
00:16:04.914 "bdev_malloc_create", 00:16:04.914 "bdev_null_resize", 00:16:04.914 "bdev_null_delete", 00:16:04.914 "bdev_null_create", 00:16:04.914 "bdev_nvme_cuse_unregister", 00:16:04.914 "bdev_nvme_cuse_register", 00:16:04.914 "bdev_opal_new_user", 00:16:04.914 "bdev_opal_set_lock_state", 00:16:04.914 "bdev_opal_delete", 00:16:04.914 "bdev_opal_get_info", 00:16:04.914 "bdev_opal_create", 00:16:04.914 "bdev_nvme_opal_revert", 00:16:04.914 "bdev_nvme_opal_init", 00:16:04.914 "bdev_nvme_send_cmd", 00:16:04.914 "bdev_nvme_set_keys", 00:16:04.914 "bdev_nvme_get_path_iostat", 00:16:04.914 "bdev_nvme_get_mdns_discovery_info", 00:16:04.914 "bdev_nvme_stop_mdns_discovery", 00:16:04.914 "bdev_nvme_start_mdns_discovery", 00:16:04.914 "bdev_nvme_set_multipath_policy", 00:16:04.914 "bdev_nvme_set_preferred_path", 00:16:04.914 "bdev_nvme_get_io_paths", 00:16:04.914 "bdev_nvme_remove_error_injection", 00:16:04.914 "bdev_nvme_add_error_injection", 00:16:04.914 "bdev_nvme_get_discovery_info", 00:16:04.914 "bdev_nvme_stop_discovery", 00:16:04.914 "bdev_nvme_start_discovery", 00:16:04.914 "bdev_nvme_get_controller_health_info", 00:16:04.914 "bdev_nvme_disable_controller", 00:16:04.914 "bdev_nvme_enable_controller", 00:16:04.914 "bdev_nvme_reset_controller", 00:16:04.914 "bdev_nvme_get_transport_statistics", 00:16:04.914 "bdev_nvme_apply_firmware", 00:16:04.914 "bdev_nvme_detach_controller", 00:16:04.914 "bdev_nvme_get_controllers", 00:16:04.914 "bdev_nvme_attach_controller", 00:16:04.914 "bdev_nvme_set_hotplug", 00:16:04.914 "bdev_nvme_set_options", 00:16:04.914 "bdev_passthru_delete", 00:16:04.914 "bdev_passthru_create", 00:16:04.914 "bdev_lvol_set_parent_bdev", 00:16:04.914 "bdev_lvol_set_parent", 00:16:04.914 "bdev_lvol_check_shallow_copy", 00:16:04.914 "bdev_lvol_start_shallow_copy", 00:16:04.914 "bdev_lvol_grow_lvstore", 00:16:04.914 "bdev_lvol_get_lvols", 00:16:04.914 "bdev_lvol_get_lvstores", 00:16:04.914 "bdev_lvol_delete", 00:16:04.914 "bdev_lvol_set_read_only", 
00:16:04.914 "bdev_lvol_resize", 00:16:04.914 "bdev_lvol_decouple_parent", 00:16:04.914 "bdev_lvol_inflate", 00:16:04.914 "bdev_lvol_rename", 00:16:04.914 "bdev_lvol_clone_bdev", 00:16:04.914 "bdev_lvol_clone", 00:16:04.914 "bdev_lvol_snapshot", 00:16:04.914 "bdev_lvol_create", 00:16:04.914 "bdev_lvol_delete_lvstore", 00:16:04.914 "bdev_lvol_rename_lvstore", 00:16:04.914 "bdev_lvol_create_lvstore", 00:16:04.914 "bdev_raid_set_options", 00:16:04.914 "bdev_raid_remove_base_bdev", 00:16:04.914 "bdev_raid_add_base_bdev", 00:16:04.914 "bdev_raid_delete", 00:16:04.914 "bdev_raid_create", 00:16:04.914 "bdev_raid_get_bdevs", 00:16:04.914 "bdev_error_inject_error", 00:16:04.914 "bdev_error_delete", 00:16:04.914 "bdev_error_create", 00:16:04.914 "bdev_split_delete", 00:16:04.914 "bdev_split_create", 00:16:04.914 "bdev_delay_delete", 00:16:04.914 "bdev_delay_create", 00:16:04.914 "bdev_delay_update_latency", 00:16:04.914 "bdev_zone_block_delete", 00:16:04.914 "bdev_zone_block_create", 00:16:04.914 "blobfs_create", 00:16:04.914 "blobfs_detect", 00:16:04.914 "blobfs_set_cache_size", 00:16:04.914 "bdev_aio_delete", 00:16:04.914 "bdev_aio_rescan", 00:16:04.914 "bdev_aio_create", 00:16:04.914 "bdev_ftl_set_property", 00:16:04.914 "bdev_ftl_get_properties", 00:16:04.914 "bdev_ftl_get_stats", 00:16:04.914 "bdev_ftl_unmap", 00:16:04.914 "bdev_ftl_unload", 00:16:04.914 "bdev_ftl_delete", 00:16:04.914 "bdev_ftl_load", 00:16:04.914 "bdev_ftl_create", 00:16:04.914 "bdev_virtio_attach_controller", 00:16:04.914 "bdev_virtio_scsi_get_devices", 00:16:04.914 "bdev_virtio_detach_controller", 00:16:04.914 "bdev_virtio_blk_set_hotplug", 00:16:04.914 "bdev_iscsi_delete", 00:16:04.914 "bdev_iscsi_create", 00:16:04.914 "bdev_iscsi_set_options", 00:16:04.914 "accel_error_inject_error", 00:16:04.914 "ioat_scan_accel_module", 00:16:04.914 "dsa_scan_accel_module", 00:16:04.914 "iaa_scan_accel_module", 00:16:04.914 "keyring_file_remove_key", 00:16:04.914 "keyring_file_add_key", 00:16:04.914 
"keyring_linux_set_options", 00:16:04.914 "fsdev_aio_delete", 00:16:04.914 "fsdev_aio_create", 00:16:04.914 "iscsi_get_histogram", 00:16:04.914 "iscsi_enable_histogram", 00:16:04.914 "iscsi_set_options", 00:16:04.914 "iscsi_get_auth_groups", 00:16:04.914 "iscsi_auth_group_remove_secret", 00:16:04.914 "iscsi_auth_group_add_secret", 00:16:04.914 "iscsi_delete_auth_group", 00:16:04.914 "iscsi_create_auth_group", 00:16:04.914 "iscsi_set_discovery_auth", 00:16:04.914 "iscsi_get_options", 00:16:04.914 "iscsi_target_node_request_logout", 00:16:04.914 "iscsi_target_node_set_redirect", 00:16:04.914 "iscsi_target_node_set_auth", 00:16:04.914 "iscsi_target_node_add_lun", 00:16:04.914 "iscsi_get_stats", 00:16:04.914 "iscsi_get_connections", 00:16:04.914 "iscsi_portal_group_set_auth", 00:16:04.915 "iscsi_start_portal_group", 00:16:04.915 "iscsi_delete_portal_group", 00:16:04.915 "iscsi_create_portal_group", 00:16:04.915 "iscsi_get_portal_groups", 00:16:04.915 "iscsi_delete_target_node", 00:16:04.915 "iscsi_target_node_remove_pg_ig_maps", 00:16:04.915 "iscsi_target_node_add_pg_ig_maps", 00:16:04.915 "iscsi_create_target_node", 00:16:04.915 "iscsi_get_target_nodes", 00:16:04.915 "iscsi_delete_initiator_group", 00:16:04.915 "iscsi_initiator_group_remove_initiators", 00:16:04.915 "iscsi_initiator_group_add_initiators", 00:16:04.915 "iscsi_create_initiator_group", 00:16:04.915 "iscsi_get_initiator_groups", 00:16:04.915 "nvmf_set_crdt", 00:16:04.915 "nvmf_set_config", 00:16:04.915 "nvmf_set_max_subsystems", 00:16:04.915 "nvmf_stop_mdns_prr", 00:16:04.915 "nvmf_publish_mdns_prr", 00:16:04.915 "nvmf_subsystem_get_listeners", 00:16:04.915 "nvmf_subsystem_get_qpairs", 00:16:04.915 "nvmf_subsystem_get_controllers", 00:16:04.915 "nvmf_get_stats", 00:16:04.915 "nvmf_get_transports", 00:16:04.915 "nvmf_create_transport", 00:16:04.915 "nvmf_get_targets", 00:16:04.915 "nvmf_delete_target", 00:16:04.915 "nvmf_create_target", 00:16:04.915 "nvmf_subsystem_allow_any_host", 00:16:04.915 
"nvmf_subsystem_set_keys", 00:16:04.915 "nvmf_subsystem_remove_host", 00:16:04.915 "nvmf_subsystem_add_host", 00:16:04.915 "nvmf_ns_remove_host", 00:16:04.915 "nvmf_ns_add_host", 00:16:04.915 "nvmf_subsystem_remove_ns", 00:16:04.915 "nvmf_subsystem_set_ns_ana_group", 00:16:04.915 "nvmf_subsystem_add_ns", 00:16:04.915 "nvmf_subsystem_listener_set_ana_state", 00:16:04.915 "nvmf_discovery_get_referrals", 00:16:04.915 "nvmf_discovery_remove_referral", 00:16:04.915 "nvmf_discovery_add_referral", 00:16:04.915 "nvmf_subsystem_remove_listener", 00:16:04.915 "nvmf_subsystem_add_listener", 00:16:04.915 "nvmf_delete_subsystem", 00:16:04.915 "nvmf_create_subsystem", 00:16:04.915 "nvmf_get_subsystems", 00:16:04.915 "env_dpdk_get_mem_stats", 00:16:04.915 "nbd_get_disks", 00:16:04.915 "nbd_stop_disk", 00:16:04.915 "nbd_start_disk", 00:16:04.915 "ublk_recover_disk", 00:16:04.915 "ublk_get_disks", 00:16:04.915 "ublk_stop_disk", 00:16:04.915 "ublk_start_disk", 00:16:04.915 "ublk_destroy_target", 00:16:04.915 "ublk_create_target", 00:16:04.915 "virtio_blk_create_transport", 00:16:04.915 "virtio_blk_get_transports", 00:16:04.915 "vhost_controller_set_coalescing", 00:16:04.915 "vhost_get_controllers", 00:16:04.915 "vhost_delete_controller", 00:16:04.915 "vhost_create_blk_controller", 00:16:04.915 "vhost_scsi_controller_remove_target", 00:16:04.915 "vhost_scsi_controller_add_target", 00:16:04.915 "vhost_start_scsi_controller", 00:16:04.915 "vhost_create_scsi_controller", 00:16:04.915 "thread_set_cpumask", 00:16:04.915 "scheduler_set_options", 00:16:04.915 "framework_get_governor", 00:16:04.915 "framework_get_scheduler", 00:16:04.915 "framework_set_scheduler", 00:16:04.915 "framework_get_reactors", 00:16:04.915 "thread_get_io_channels", 00:16:04.915 "thread_get_pollers", 00:16:04.915 "thread_get_stats", 00:16:04.915 "framework_monitor_context_switch", 00:16:04.915 "spdk_kill_instance", 00:16:04.915 "log_enable_timestamps", 00:16:04.915 "log_get_flags", 00:16:04.915 "log_clear_flag", 
00:16:04.915 "log_set_flag", 00:16:04.915 "log_get_level", 00:16:04.915 "log_set_level", 00:16:04.915 "log_get_print_level", 00:16:04.915 "log_set_print_level", 00:16:04.915 "framework_enable_cpumask_locks", 00:16:04.915 "framework_disable_cpumask_locks", 00:16:04.915 "framework_wait_init", 00:16:04.915 "framework_start_init", 00:16:04.915 "scsi_get_devices", 00:16:04.915 "bdev_get_histogram", 00:16:04.915 "bdev_enable_histogram", 00:16:04.915 "bdev_set_qos_limit", 00:16:04.915 "bdev_set_qd_sampling_period", 00:16:04.915 "bdev_get_bdevs", 00:16:04.915 "bdev_reset_iostat", 00:16:04.915 "bdev_get_iostat", 00:16:04.915 "bdev_examine", 00:16:04.915 "bdev_wait_for_examine", 00:16:04.915 "bdev_set_options", 00:16:04.915 "accel_get_stats", 00:16:04.915 "accel_set_options", 00:16:04.915 "accel_set_driver", 00:16:04.915 "accel_crypto_key_destroy", 00:16:04.915 "accel_crypto_keys_get", 00:16:04.915 "accel_crypto_key_create", 00:16:04.915 "accel_assign_opc", 00:16:04.915 "accel_get_module_info", 00:16:04.915 "accel_get_opc_assignments", 00:16:04.915 "vmd_rescan", 00:16:04.915 "vmd_remove_device", 00:16:04.915 "vmd_enable", 00:16:04.915 "sock_get_default_impl", 00:16:04.915 "sock_set_default_impl", 00:16:04.915 "sock_impl_set_options", 00:16:04.915 "sock_impl_get_options", 00:16:04.915 "iobuf_get_stats", 00:16:04.915 "iobuf_set_options", 00:16:04.915 "keyring_get_keys", 00:16:04.915 "framework_get_pci_devices", 00:16:04.915 "framework_get_config", 00:16:04.915 "framework_get_subsystems", 00:16:04.915 "fsdev_set_opts", 00:16:04.915 "fsdev_get_opts", 00:16:04.915 "trace_get_info", 00:16:04.915 "trace_get_tpoint_group_mask", 00:16:04.915 "trace_disable_tpoint_group", 00:16:04.915 "trace_enable_tpoint_group", 00:16:04.915 "trace_clear_tpoint_mask", 00:16:04.915 "trace_set_tpoint_mask", 00:16:04.915 "notify_get_notifications", 00:16:04.915 "notify_get_types", 00:16:04.915 "spdk_get_version", 00:16:04.915 "rpc_get_methods" 00:16:04.915 ] 00:16:04.915 07:37:04 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@733 -- # xtrace_disable 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.915 07:37:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:04.915 07:37:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57593 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' -z 57593 ']' 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@957 -- # kill -0 57593 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # uname 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 57593 00:16:04.915 killing process with pid 57593 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@971 -- # echo 'killing process with pid 57593' 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # kill 57593 00:16:04.915 07:37:04 spdkcli_tcp -- common/autotest_common.sh@977 -- # wait 57593 00:16:08.200 ************************************ 00:16:08.200 END TEST spdkcli_tcp 00:16:08.200 ************************************ 00:16:08.200 00:16:08.200 real 0m5.378s 00:16:08.200 user 0m9.570s 00:16:08.200 sys 0m0.754s 00:16:08.200 07:37:07 spdkcli_tcp -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:08.200 07:37:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.200 07:37:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:08.200 07:37:07 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:08.200 07:37:07 -- 
common/autotest_common.sh@1110 -- # xtrace_disable 00:16:08.200 07:37:07 -- common/autotest_common.sh@10 -- # set +x 00:16:08.200 ************************************ 00:16:08.200 START TEST dpdk_mem_utility 00:16:08.200 ************************************ 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:08.200 * Looking for test storage... 00:16:08.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1626 -- # lcov --version 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:16:08.200 
07:37:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.200 07:37:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.200 --rc genhtml_branch_coverage=1 00:16:08.200 --rc genhtml_function_coverage=1 00:16:08.200 --rc genhtml_legend=1 00:16:08.200 --rc geninfo_all_blocks=1 00:16:08.200 --rc geninfo_unexecuted_blocks=1 00:16:08.200 00:16:08.200 ' 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.200 --rc 
genhtml_branch_coverage=1 00:16:08.200 --rc genhtml_function_coverage=1 00:16:08.200 --rc genhtml_legend=1 00:16:08.200 --rc geninfo_all_blocks=1 00:16:08.200 --rc geninfo_unexecuted_blocks=1 00:16:08.200 00:16:08.200 ' 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.200 --rc genhtml_branch_coverage=1 00:16:08.200 --rc genhtml_function_coverage=1 00:16:08.200 --rc genhtml_legend=1 00:16:08.200 --rc geninfo_all_blocks=1 00:16:08.200 --rc geninfo_unexecuted_blocks=1 00:16:08.200 00:16:08.200 ' 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:08.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.200 --rc genhtml_branch_coverage=1 00:16:08.200 --rc genhtml_function_coverage=1 00:16:08.200 --rc genhtml_legend=1 00:16:08.200 --rc geninfo_all_blocks=1 00:16:08.200 --rc geninfo_unexecuted_blocks=1 00:16:08.200 00:16:08.200 ' 00:16:08.200 07:37:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:08.200 07:37:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57732 00:16:08.200 07:37:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.200 07:37:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57732 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@834 -- # '[' -z 57732 ']' 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:08.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:08.200 07:37:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:08.459 [2024-10-07 07:37:07.779364] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:16:08.459 [2024-10-07 07:37:07.779884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57732 ] 00:16:08.459 [2024-10-07 07:37:07.970255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.717 [2024-10-07 07:37:08.236787] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.093 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:10.093 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@867 -- # return 0 00:16:10.093 07:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:10.093 07:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:10.093 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:10.093 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:10.093 { 00:16:10.093 "filename": "/tmp/spdk_mem_dump.txt" 00:16:10.093 } 00:16:10.093 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:10.093 07:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:10.093 DPDK memory size 866.000000 MiB in 1 heap(s) 00:16:10.093 1 heaps totaling size 866.000000 MiB 00:16:10.093 size: 
866.000000 MiB heap id: 0 00:16:10.093 end heaps---------- 00:16:10.093 9 mempools totaling size 642.649841 MiB 00:16:10.093 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:10.093 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:10.093 size: 92.545471 MiB name: bdev_io_57732 00:16:10.093 size: 51.011292 MiB name: evtpool_57732 00:16:10.093 size: 50.003479 MiB name: msgpool_57732 00:16:10.093 size: 36.509338 MiB name: fsdev_io_57732 00:16:10.093 size: 21.763794 MiB name: PDU_Pool 00:16:10.093 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:10.093 size: 0.026123 MiB name: Session_Pool 00:16:10.093 end mempools------- 00:16:10.093 6 memzones totaling size 4.142822 MiB 00:16:10.093 size: 1.000366 MiB name: RG_ring_0_57732 00:16:10.093 size: 1.000366 MiB name: RG_ring_1_57732 00:16:10.093 size: 1.000366 MiB name: RG_ring_4_57732 00:16:10.093 size: 1.000366 MiB name: RG_ring_5_57732 00:16:10.093 size: 0.125366 MiB name: RG_ring_2_57732 00:16:10.093 size: 0.015991 MiB name: RG_ring_3_57732 00:16:10.093 end memzones------- 00:16:10.093 07:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:10.093 heap id: 0 total size: 866.000000 MiB number of busy elements: 269 number of free elements: 19 00:16:10.093 list of free elements. 
size: 19.924805 MiB 00:16:10.093 element at address: 0x200000400000 with size: 1.999451 MiB 00:16:10.093 element at address: 0x200000800000 with size: 1.996887 MiB 00:16:10.093 element at address: 0x200009600000 with size: 1.995972 MiB 00:16:10.093 element at address: 0x20000d800000 with size: 1.995972 MiB 00:16:10.093 element at address: 0x200007000000 with size: 1.991028 MiB 00:16:10.093 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:16:10.093 element at address: 0x20001c300040 with size: 0.999939 MiB 00:16:10.093 element at address: 0x20001c400000 with size: 0.999084 MiB 00:16:10.093 element at address: 0x200035000000 with size: 0.994324 MiB 00:16:10.093 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:16:10.093 element at address: 0x20001c700040 with size: 0.936401 MiB 00:16:10.093 element at address: 0x200000200000 with size: 0.834839 MiB 00:16:10.093 element at address: 0x20001de00000 with size: 0.567322 MiB 00:16:10.094 element at address: 0x200003e00000 with size: 0.490173 MiB 00:16:10.094 element at address: 0x20001c000000 with size: 0.489441 MiB 00:16:10.094 element at address: 0x20001c800000 with size: 0.485413 MiB 00:16:10.094 element at address: 0x200015e00000 with size: 0.443237 MiB 00:16:10.094 element at address: 0x20002b200000 with size: 0.390442 MiB 00:16:10.094 element at address: 0x200003a00000 with size: 0.355286 MiB 00:16:10.094 list of standard malloc elements. 
size: 199.276489 MiB 00:16:10.094 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:16:10.094 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:16:10.094 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:16:10.094 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:16:10.094 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:16:10.094 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:16:10.094 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:16:10.094 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:16:10.094 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:16:10.094 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:16:10.094 element at address: 0x200015dff040 with size: 0.000305 MiB 00:16:10.094 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:16:10.094 element at 
address: 0x2000002d6d00 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:16:10.094 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003aff800 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003affa80 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7d7c0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7d8c0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:16:10.094 element at address: 0x200003e7dec0 with size: 0.000244 MiB 
00:16:10.094 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:16:10.094 [several hundred further free-list elements, each with size: 0.000244 MiB, spanning addresses 0x200003e7e0c0 through 0x20002b26fe80, elided] 00:16:10.095 list of memzone associated elements.
size: 646.798706 MiB 00:16:10.095 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:16:10.095 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:16:10.095 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:16:10.095 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:16:10.095 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:16:10.095 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57732_0 00:16:10.095 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:16:10.095 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57732_0 00:16:10.095 element at address: 0x200003fff340 with size: 48.003113 MiB 00:16:10.095 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57732_0 00:16:10.095 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:16:10.095 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57732_0 00:16:10.095 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:16:10.095 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:16:10.095 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:16:10.095 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:16:10.095 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:16:10.095 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57732 00:16:10.095 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:16:10.095 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57732 00:16:10.095 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:16:10.095 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57732 00:16:10.095 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:16:10.095 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:10.095 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:16:10.095 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:10.095 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:16:10.095 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:10.095 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:16:10.095 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:10.096 element at address: 0x200003eff100 with size: 1.000549 MiB 00:16:10.096 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57732 00:16:10.096 element at address: 0x200003affb80 with size: 1.000549 MiB 00:16:10.096 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57732 00:16:10.096 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:16:10.096 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57732 00:16:10.096 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:16:10.096 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57732 00:16:10.096 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:16:10.096 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57732 00:16:10.096 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:16:10.096 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57732 00:16:10.096 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:16:10.096 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:10.096 element at address: 0x200015e72280 with size: 0.500549 MiB 00:16:10.096 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:10.096 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:16:10.096 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:16:10.096 element at address: 0x200003a5f180 with size: 0.125549 MiB 00:16:10.096 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57732 00:16:10.096 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:16:10.096 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:10.096 element at address: 0x20002b264140 with size: 0.023804 MiB 00:16:10.096 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:10.096 element at address: 0x200003a5af40 with size: 0.016174 MiB 00:16:10.096 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57732 00:16:10.096 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:16:10.096 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:10.096 element at address: 0x2000002d6280 with size: 0.000366 MiB 00:16:10.096 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57732 00:16:10.096 element at address: 0x200003aff900 with size: 0.000366 MiB 00:16:10.096 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57732 00:16:10.096 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:16:10.096 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57732 00:16:10.096 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:16:10.096 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:10.096 07:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:10.096 07:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57732 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' -z 57732 ']' 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@957 -- # kill -0 57732 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # uname 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 57732 00:16:10.096 killing process with pid 57732 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:16:10.096 07:37:09 
dpdk_mem_utility -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@971 -- # echo 'killing process with pid 57732' 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # kill 57732 00:16:10.096 07:37:09 dpdk_mem_utility -- common/autotest_common.sh@977 -- # wait 57732 00:16:13.463 00:16:13.463 real 0m5.069s 00:16:13.463 user 0m5.004s 00:16:13.463 sys 0m0.665s 00:16:13.463 07:37:12 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:13.463 ************************************ 00:16:13.463 END TEST dpdk_mem_utility 00:16:13.463 ************************************ 00:16:13.463 07:37:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:13.463 07:37:12 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:13.463 07:37:12 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:13.463 07:37:12 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:13.463 07:37:12 -- common/autotest_common.sh@10 -- # set +x 00:16:13.463 ************************************ 00:16:13.463 START TEST event 00:16:13.463 ************************************ 00:16:13.463 07:37:12 event -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:13.463 * Looking for test storage... 
00:16:13.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:13.463 07:37:12 event -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:13.463 07:37:12 event -- common/autotest_common.sh@1626 -- # lcov --version 00:16:13.463 07:37:12 event -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:13.463 07:37:12 event -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:13.463 07:37:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.463 07:37:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.463 07:37:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.463 07:37:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.463 07:37:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.463 07:37:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.463 07:37:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.463 07:37:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.463 07:37:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.463 07:37:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.463 07:37:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.463 07:37:12 event -- scripts/common.sh@344 -- # case "$op" in 00:16:13.463 07:37:12 event -- scripts/common.sh@345 -- # : 1 00:16:13.463 07:37:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.463 07:37:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.463 07:37:12 event -- scripts/common.sh@365 -- # decimal 1 00:16:13.463 07:37:12 event -- scripts/common.sh@353 -- # local d=1 00:16:13.463 07:37:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.463 07:37:12 event -- scripts/common.sh@355 -- # echo 1 00:16:13.463 07:37:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.463 07:37:12 event -- scripts/common.sh@366 -- # decimal 2 00:16:13.463 07:37:12 event -- scripts/common.sh@353 -- # local d=2 00:16:13.463 07:37:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.463 07:37:12 event -- scripts/common.sh@355 -- # echo 2 00:16:13.463 07:37:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.463 07:37:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.463 07:37:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.463 07:37:12 event -- scripts/common.sh@368 -- # return 0 00:16:13.463 07:37:12 event -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.463 07:37:12 event -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:13.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.464 --rc genhtml_branch_coverage=1 00:16:13.464 --rc genhtml_function_coverage=1 00:16:13.464 --rc genhtml_legend=1 00:16:13.464 --rc geninfo_all_blocks=1 00:16:13.464 --rc geninfo_unexecuted_blocks=1 00:16:13.464 00:16:13.464 ' 00:16:13.464 07:37:12 event -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:13.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.464 --rc genhtml_branch_coverage=1 00:16:13.464 --rc genhtml_function_coverage=1 00:16:13.464 --rc genhtml_legend=1 00:16:13.464 --rc geninfo_all_blocks=1 00:16:13.464 --rc geninfo_unexecuted_blocks=1 00:16:13.464 00:16:13.464 ' 00:16:13.464 07:37:12 event -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:13.464 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:13.464 --rc genhtml_branch_coverage=1 00:16:13.464 --rc genhtml_function_coverage=1 00:16:13.464 --rc genhtml_legend=1 00:16:13.464 --rc geninfo_all_blocks=1 00:16:13.464 --rc geninfo_unexecuted_blocks=1 00:16:13.464 00:16:13.464 ' 00:16:13.464 07:37:12 event -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:13.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.464 --rc genhtml_branch_coverage=1 00:16:13.464 --rc genhtml_function_coverage=1 00:16:13.464 --rc genhtml_legend=1 00:16:13.464 --rc geninfo_all_blocks=1 00:16:13.464 --rc geninfo_unexecuted_blocks=1 00:16:13.464 00:16:13.464 ' 00:16:13.464 07:37:12 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:13.464 07:37:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:16:13.464 07:37:12 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:13.464 07:37:12 event -- common/autotest_common.sh@1104 -- # '[' 6 -le 1 ']' 00:16:13.464 07:37:12 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:13.464 07:37:12 event -- common/autotest_common.sh@10 -- # set +x 00:16:13.464 ************************************ 00:16:13.464 START TEST event_perf 00:16:13.464 ************************************ 00:16:13.464 07:37:12 event.event_perf -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:13.464 Running I/O for 1 seconds...[2024-10-07 07:37:12.820550] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:16:13.464 [2024-10-07 07:37:12.820970] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57857 ] 00:16:13.464 [2024-10-07 07:37:13.004460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.031 [2024-10-07 07:37:13.285909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.031 [2024-10-07 07:37:13.286049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.031 [2024-10-07 07:37:13.286107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.031 Running I/O for 1 seconds...[2024-10-07 07:37:13.286109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.409 00:16:15.409 lcore 0: 174177 00:16:15.409 lcore 1: 174176 00:16:15.409 lcore 2: 174176 00:16:15.409 lcore 3: 174176 00:16:15.409 done. 
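The event_perf run above prints one counter per lcore ("lcore N: COUNT"). A small sketch that totals such output (`sum_lcore_counts` is a hypothetical helper, not part of the SPDK test scripts):

```shell
# Sum the per-lcore event counters printed by event_perf above.
# sum_lcore_counts is a hypothetical helper for illustration only.
sum_lcore_counts() {
    awk '/^lcore [0-9]+:/ { total += $3 } END { print total }'
}

printf 'lcore 0: 174177\nlcore 1: 174176\nlcore 2: 174176\nlcore 3: 174176\n' |
    sum_lcore_counts
# → 696705
```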
00:16:15.409 00:16:15.409 real 0m1.991s 00:16:15.409 user 0m4.693s 00:16:15.409 sys 0m0.164s 00:16:15.409 07:37:14 event.event_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:15.409 07:37:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:16:15.409 ************************************ 00:16:15.409 END TEST event_perf 00:16:15.409 ************************************ 00:16:15.409 07:37:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:15.409 07:37:14 event -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:16:15.409 07:37:14 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:15.409 07:37:14 event -- common/autotest_common.sh@10 -- # set +x 00:16:15.409 ************************************ 00:16:15.409 START TEST event_reactor 00:16:15.409 ************************************ 00:16:15.409 07:37:14 event.event_reactor -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:15.409 [2024-10-07 07:37:14.871131] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:16:15.409 [2024-10-07 07:37:14.871629] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57902 ] 00:16:15.668 [2024-10-07 07:37:15.073579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.927 [2024-10-07 07:37:15.391064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.328 test_start 00:16:17.328 oneshot 00:16:17.328 tick 100 00:16:17.328 tick 100 00:16:17.328 tick 250 00:16:17.328 tick 100 00:16:17.328 tick 100 00:16:17.328 tick 100 00:16:17.328 tick 250 00:16:17.328 tick 500 00:16:17.328 tick 100 00:16:17.328 tick 100 00:16:17.328 tick 250 00:16:17.328 tick 100 00:16:17.328 tick 100 00:16:17.328 test_end 00:16:17.328 00:16:17.328 real 0m2.044s 00:16:17.328 user 0m1.792s 00:16:17.328 sys 0m0.137s 00:16:17.328 ************************************ 00:16:17.328 END TEST event_reactor 00:16:17.328 ************************************ 00:16:17.328 07:37:16 event.event_reactor -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:17.328 07:37:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:16:17.587 07:37:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:17.587 07:37:16 event -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:16:17.587 07:37:16 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:17.587 07:37:16 event -- common/autotest_common.sh@10 -- # set +x 00:16:17.587 ************************************ 00:16:17.587 START TEST event_reactor_perf 00:16:17.587 ************************************ 00:16:17.587 07:37:16 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:17.587 [2024-10-07 
07:37:16.956022] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:16:17.587 [2024-10-07 07:37:16.956251] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57944 ] 00:16:17.587 [2024-10-07 07:37:17.127349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.846 [2024-10-07 07:37:17.380934] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.747 test_start 00:16:19.747 test_end 00:16:19.747 Performance: 291916 events per second 00:16:19.747 00:16:19.747 real 0m1.945s 00:16:19.747 user 0m1.702s 00:16:19.747 sys 0m0.129s 00:16:19.747 ************************************ 00:16:19.747 END TEST event_reactor_perf 00:16:19.747 ************************************ 00:16:19.747 07:37:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:19.747 07:37:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:16:19.747 07:37:18 event -- event/event.sh@49 -- # uname -s 00:16:19.747 07:37:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:19.747 07:37:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:19.747 07:37:18 event -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:19.747 07:37:18 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:19.747 07:37:18 event -- common/autotest_common.sh@10 -- # set +x 00:16:19.747 ************************************ 00:16:19.747 START TEST event_scheduler 00:16:19.747 ************************************ 00:16:19.747 07:37:18 event.event_scheduler -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:19.747 * Looking for test storage... 
00:16:19.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:16:19.747 07:37:19 event.event_scheduler -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:19.747 07:37:19 event.event_scheduler -- common/autotest_common.sh@1626 -- # lcov --version 00:16:19.747 07:37:19 event.event_scheduler -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:19.747 07:37:19 event.event_scheduler -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:19.747 07:37:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.747 07:37:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.747 07:37:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.748 07:37:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:19.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.748 --rc genhtml_branch_coverage=1 00:16:19.748 --rc genhtml_function_coverage=1 00:16:19.748 --rc genhtml_legend=1 00:16:19.748 --rc geninfo_all_blocks=1 00:16:19.748 --rc geninfo_unexecuted_blocks=1 00:16:19.748 00:16:19.748 ' 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:19.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.748 --rc genhtml_branch_coverage=1 00:16:19.748 --rc genhtml_function_coverage=1 00:16:19.748 --rc 
genhtml_legend=1 00:16:19.748 --rc geninfo_all_blocks=1 00:16:19.748 --rc geninfo_unexecuted_blocks=1 00:16:19.748 00:16:19.748 ' 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:19.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.748 --rc genhtml_branch_coverage=1 00:16:19.748 --rc genhtml_function_coverage=1 00:16:19.748 --rc genhtml_legend=1 00:16:19.748 --rc geninfo_all_blocks=1 00:16:19.748 --rc geninfo_unexecuted_blocks=1 00:16:19.748 00:16:19.748 ' 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:19.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.748 --rc genhtml_branch_coverage=1 00:16:19.748 --rc genhtml_function_coverage=1 00:16:19.748 --rc genhtml_legend=1 00:16:19.748 --rc geninfo_all_blocks=1 00:16:19.748 --rc geninfo_unexecuted_blocks=1 00:16:19.748 00:16:19.748 ' 00:16:19.748 07:37:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:19.748 07:37:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58026 00:16:19.748 07:37:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:19.748 07:37:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:16:19.748 07:37:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58026 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@834 -- # '[' -z 58026 ']' 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:16:19.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:19.748 07:37:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:19.748 [2024-10-07 07:37:19.258307] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:16:19.748 [2024-10-07 07:37:19.258731] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58026 ] 00:16:20.005 [2024-10-07 07:37:19.447403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.262 [2024-10-07 07:37:19.758415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.262 [2024-10-07 07:37:19.758628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.262 [2024-10-07 07:37:19.758735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.262 [2024-10-07 07:37:19.759486] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.826 07:37:20 event.event_scheduler -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:20.826 07:37:20 event.event_scheduler -- common/autotest_common.sh@867 -- # return 0 00:16:20.826 07:37:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:20.826 07:37:20 event.event_scheduler -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:20.826 07:37:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.826 POWER: Cannot set governor of lcore 0 to userspace 00:16:20.826 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.826 POWER: Cannot set governor of lcore 0 to performance 00:16:20.826 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.826 POWER: Cannot set governor of lcore 0 to userspace 00:16:20.826 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.826 POWER: Cannot set governor of lcore 0 to userspace 00:16:20.826 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:16:20.826 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:16:20.826 POWER: Unable to set Power Management Environment for lcore 0 00:16:20.826 [2024-10-07 07:37:20.268822] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:16:20.826 [2024-10-07 07:37:20.268860] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:16:20.826 [2024-10-07 07:37:20.268875] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:16:20.826 [2024-10-07 07:37:20.268902] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:16:20.826 [2024-10-07 07:37:20.268914] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:16:20.826 [2024-10-07 07:37:20.268928] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:16:20.826 07:37:20 event.event_scheduler -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:20.826 07:37:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:20.826 07:37:20 event.event_scheduler -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:20.826 07:37:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:21.084 [2024-10-07 07:37:20.637056] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:16:21.084 07:37:20 event.event_scheduler -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.084 07:37:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:16:21.084 07:37:20 event.event_scheduler -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:21.084 07:37:20 event.event_scheduler -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:21.084 07:37:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 ************************************ 00:16:21.341 START TEST scheduler_create_thread 00:16:21.341 ************************************ 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # scheduler_create_thread 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 2 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 3 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 4 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 5 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 6 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.341 7 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 8 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.341 9 00:16:21.341 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.342 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:16:21.342 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.342 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:21.342 10 00:16:21.342 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:21.342 07:37:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:16:21.342 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:21.342 07:37:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:22.787 07:37:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:22.787 07:37:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:16:22.787 07:37:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:16:22.787 07:37:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:22.787 07:37:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:23.721 07:37:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:23.721 07:37:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:16:23.721 07:37:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:23.721 07:37:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:24.288 07:37:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:24.288 07:37:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:16:24.288 07:37:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:16:24.288 07:37:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:24.288 07:37:23 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:25.224 07:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:25.224 00:16:25.224 real 0m3.891s 00:16:25.224 user 0m0.018s 00:16:25.224 sys 0m0.009s 00:16:25.224 07:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:25.224 07:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:25.224 ************************************ 00:16:25.224 END TEST scheduler_create_thread 00:16:25.224 ************************************ 00:16:25.224 07:37:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:25.224 07:37:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58026 00:16:25.224 07:37:24 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' -z 58026 ']' 00:16:25.224 07:37:24 event.event_scheduler -- common/autotest_common.sh@957 -- # kill -0 58026 00:16:25.224 07:37:24 event.event_scheduler -- common/autotest_common.sh@958 -- # uname 00:16:25.224 07:37:24 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:25.224 07:37:24 event.event_scheduler -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 58026 00:16:25.224 07:37:24 event.event_scheduler -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:16:25.225 07:37:24 event.event_scheduler -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:16:25.225 07:37:24 event.event_scheduler -- common/autotest_common.sh@971 -- # echo 'killing process with pid 58026' 00:16:25.225 killing process with pid 58026 00:16:25.225 07:37:24 event.event_scheduler -- common/autotest_common.sh@972 -- # kill 58026 00:16:25.225 07:37:24 event.event_scheduler -- common/autotest_common.sh@977 -- # wait 58026 00:16:25.483 [2024-10-07 07:37:24.924350] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:16:27.382 00:16:27.382 real 0m7.575s 00:16:27.382 user 0m15.214s 00:16:27.382 sys 0m0.589s 00:16:27.382 07:37:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:27.382 07:37:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:27.382 ************************************ 00:16:27.382 END TEST event_scheduler 00:16:27.382 ************************************ 00:16:27.382 07:37:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:16:27.382 07:37:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:27.382 07:37:26 event -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:27.382 07:37:26 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:27.382 07:37:26 event -- common/autotest_common.sh@10 -- # set +x 00:16:27.382 ************************************ 00:16:27.382 START TEST app_repeat 00:16:27.382 ************************************ 00:16:27.382 07:37:26 event.app_repeat -- common/autotest_common.sh@1128 -- # app_repeat_test 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58159 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:27.382 
07:37:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:27.382 Process app_repeat pid: 58159 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58159' 00:16:27.382 spdk_app_start Round 0 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:27.382 07:37:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:16:27.382 07:37:26 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 58159 ']' 00:16:27.382 07:37:26 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:27.382 07:37:26 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:27.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:27.382 07:37:26 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:27.382 07:37:26 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:27.382 07:37:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:27.382 [2024-10-07 07:37:26.608170] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:16:27.382 [2024-10-07 07:37:26.608376] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58159 ] 00:16:27.382 [2024-10-07 07:37:26.795993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.640 [2024-10-07 07:37:27.071782] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.640 [2024-10-07 07:37:27.071790] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.205 07:37:27 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:28.205 07:37:27 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:16:28.205 07:37:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:28.798 Malloc0 00:16:28.798 07:37:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:29.056 Malloc1 00:16:29.056 07:37:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:29.056 07:37:28 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:29.056 07:37:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:29.314 /dev/nbd0 00:16:29.314 07:37:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:29.314 07:37:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:29.314 1+0 records in 00:16:29.314 1+0 
records out 00:16:29.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295015 s, 13.9 MB/s 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:16:29.314 07:37:28 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:16:29.314 07:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:29.314 07:37:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:29.314 07:37:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:29.880 /dev/nbd1 00:16:29.880 07:37:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:29.880 07:37:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:29.880 1+0 records in 00:16:29.880 1+0 records out 00:16:29.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356304 s, 11.5 MB/s 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:16:29.880 07:37:29 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:16:29.880 07:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:29.880 07:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:29.880 07:37:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:29.880 07:37:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.880 07:37:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:30.137 { 00:16:30.137 "nbd_device": "/dev/nbd0", 00:16:30.137 "bdev_name": "Malloc0" 00:16:30.137 }, 00:16:30.137 { 00:16:30.137 "nbd_device": "/dev/nbd1", 00:16:30.137 "bdev_name": "Malloc1" 00:16:30.137 } 00:16:30.137 ]' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:30.137 { 00:16:30.137 "nbd_device": "/dev/nbd0", 00:16:30.137 "bdev_name": "Malloc0" 00:16:30.137 }, 00:16:30.137 { 00:16:30.137 "nbd_device": "/dev/nbd1", 00:16:30.137 "bdev_name": "Malloc1" 00:16:30.137 } 00:16:30.137 ]' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:30.137 /dev/nbd1' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:30.137 /dev/nbd1' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:30.137 256+0 records in 00:16:30.137 256+0 records out 00:16:30.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00719959 s, 146 MB/s 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:30.137 256+0 records in 00:16:30.137 256+0 records out 00:16:30.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244523 s, 42.9 MB/s 00:16:30.137 07:37:29 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:30.137 256+0 records in 00:16:30.137 256+0 records out 00:16:30.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302437 s, 34.7 MB/s 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:30.137 07:37:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:30.138 07:37:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.138 07:37:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.138 07:37:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.138 07:37:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:30.138 07:37:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.138 07:37:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.396 07:37:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:30.963 07:37:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:31.221 07:37:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:31.221 07:37:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:31.787 07:37:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:33.234 [2024-10-07 07:37:32.723033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:33.492 [2024-10-07 07:37:32.969219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.492 [2024-10-07 07:37:32.969221] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.751 
[2024-10-07 07:37:33.210860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:33.751 [2024-10-07 07:37:33.210964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:34.685 spdk_app_start Round 1 00:16:34.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:34.685 07:37:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:34.685 07:37:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:34.685 07:37:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:16:34.685 07:37:34 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 58159 ']' 00:16:34.685 07:37:34 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:34.685 07:37:34 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:34.685 07:37:34 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:16:34.685 07:37:34 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:34.685 07:37:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:34.944 07:37:34 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:34.944 07:37:34 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:16:34.944 07:37:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:35.203 Malloc0 00:16:35.203 07:37:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:35.772 Malloc1 00:16:35.772 07:37:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:35.772 07:37:35 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:35.772 /dev/nbd0 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:35.772 1+0 records in 00:16:35.772 1+0 records out 00:16:35.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368317 s, 11.1 MB/s 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:35.772 
07:37:35 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:16:35.772 07:37:35 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.772 07:37:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:36.031 /dev/nbd1 00:16:36.290 07:37:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:36.290 07:37:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:36.290 1+0 records in 00:16:36.290 1+0 records out 00:16:36.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355868 s, 11.5 MB/s 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:16:36.290 07:37:35 event.app_repeat 
-- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:16:36.290 07:37:35 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:16:36.290 07:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.290 07:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.290 07:37:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:36.290 07:37:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:36.290 07:37:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:36.550 07:37:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:36.550 { 00:16:36.550 "nbd_device": "/dev/nbd0", 00:16:36.550 "bdev_name": "Malloc0" 00:16:36.550 }, 00:16:36.550 { 00:16:36.550 "nbd_device": "/dev/nbd1", 00:16:36.550 "bdev_name": "Malloc1" 00:16:36.550 } 00:16:36.550 ]' 00:16:36.550 07:37:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:36.550 07:37:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:36.550 { 00:16:36.550 "nbd_device": "/dev/nbd0", 00:16:36.550 "bdev_name": "Malloc0" 00:16:36.550 }, 00:16:36.550 { 00:16:36.550 "nbd_device": "/dev/nbd1", 00:16:36.550 "bdev_name": "Malloc1" 00:16:36.550 } 00:16:36.550 ]' 00:16:36.550 07:37:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:36.550 /dev/nbd1' 00:16:36.550 07:37:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:36.550 /dev/nbd1' 00:16:36.550 07:37:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:36.550 07:37:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:36.550 
07:37:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:36.550 256+0 records in 00:16:36.550 256+0 records out 00:16:36.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00714028 s, 147 MB/s 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:36.550 256+0 records in 00:16:36.550 256+0 records out 00:16:36.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342296 s, 30.6 MB/s 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:36.550 256+0 records in 00:16:36.550 256+0 records out 00:16:36.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0375558 s, 27.9 MB/s 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:36.550 07:37:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:36.809 07:37:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:37.096 07:37:36 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:37.096 07:37:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:37.662 07:37:36 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:37.663 07:37:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:37.663 07:37:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:37.921 07:37:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:39.824 [2024-10-07 07:37:38.906022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:39.824 [2024-10-07 07:37:39.163732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.824 [2024-10-07 07:37:39.163772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.083 [2024-10-07 07:37:39.387690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:40.083 [2024-10-07 07:37:39.387809] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:41.017 spdk_app_start Round 2 00:16:41.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:16:41.017 07:37:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:41.017 07:37:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:41.017 07:37:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:16:41.017 07:37:40 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 58159 ']' 00:16:41.017 07:37:40 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:41.017 07:37:40 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:41.017 07:37:40 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:41.017 07:37:40 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:41.017 07:37:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:41.276 07:37:40 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:41.276 07:37:40 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:16:41.276 07:37:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:41.842 Malloc0 00:16:41.842 07:37:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:42.100 Malloc1 00:16:42.100 07:37:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:42.100 07:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.101 07:37:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:42.358 /dev/nbd0 00:16:42.358 07:37:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:42.358 07:37:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:42.358 07:37:41 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:16:42.358 07:37:41 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:16:42.358 07:37:41 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@876 -- # break 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 
00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:42.359 1+0 records in 00:16:42.359 1+0 records out 00:16:42.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338465 s, 12.1 MB/s 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:16:42.359 07:37:41 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:16:42.359 07:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.359 07:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.359 07:37:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:42.617 /dev/nbd1 00:16:42.617 07:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:42.617 07:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@872 -- # local i 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:16:42.617 07:37:42 event.app_repeat -- 
common/autotest_common.sh@876 -- # break 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:42.617 1+0 records in 00:16:42.617 1+0 records out 00:16:42.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492849 s, 8.3 MB/s 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@889 -- # size=4096 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:16:42.617 07:37:42 event.app_repeat -- common/autotest_common.sh@892 -- # return 0 00:16:42.617 07:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.617 07:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.617 07:37:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:42.617 07:37:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:42.617 07:37:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:42.876 { 00:16:42.876 "nbd_device": "/dev/nbd0", 00:16:42.876 "bdev_name": "Malloc0" 00:16:42.876 }, 00:16:42.876 { 00:16:42.876 "nbd_device": "/dev/nbd1", 00:16:42.876 "bdev_name": "Malloc1" 00:16:42.876 } 00:16:42.876 ]' 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:42.876 { 
00:16:42.876 "nbd_device": "/dev/nbd0", 00:16:42.876 "bdev_name": "Malloc0" 00:16:42.876 }, 00:16:42.876 { 00:16:42.876 "nbd_device": "/dev/nbd1", 00:16:42.876 "bdev_name": "Malloc1" 00:16:42.876 } 00:16:42.876 ]' 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:42.876 /dev/nbd1' 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:42.876 /dev/nbd1' 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:42.876 07:37:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:43.135 256+0 records in 00:16:43.135 256+0 records out 00:16:43.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113435 s, 92.4 MB/s 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:43.135 07:37:42 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:43.135 256+0 records in 00:16:43.135 256+0 records out 00:16:43.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279775 s, 37.5 MB/s 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:43.135 256+0 records in 00:16:43.135 256+0 records out 00:16:43.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0421781 s, 24.9 MB/s 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.135 07:37:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:43.428 07:37:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:43.428 07:37:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:43.428 07:37:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:43.429 07:37:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.429 07:37:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.429 07:37:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:43.429 07:37:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:43.429 07:37:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.429 07:37:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.429 07:37:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:43.689 07:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:43.690 07:37:43 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:43.690 07:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:43.949 07:37:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:43.949 07:37:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:44.516 07:37:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:46.420 
[2024-10-07 07:37:45.506817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.420 [2024-10-07 07:37:45.745751] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.420 [2024-10-07 07:37:45.745758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.420 [2024-10-07 07:37:45.970187] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:46.420 [2024-10-07 07:37:45.970295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:47.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:47.471 07:37:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58159 /var/tmp/spdk-nbd.sock 00:16:47.471 07:37:46 event.app_repeat -- common/autotest_common.sh@834 -- # '[' -z 58159 ']' 00:16:47.471 07:37:46 event.app_repeat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:47.471 07:37:46 event.app_repeat -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:47.471 07:37:46 event.app_repeat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
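waitfornbd_exit in the trace above (nbd_common.sh@35-45) is a bounded poll: re-check /proc/partitions up to 20 times and break as soon as the nbd entry disappears. The same pattern, sketched with a temp file standing in for the /proc/partitions entry (an assumption, so it runs anywhere):

```shell
# Bounded poll-until-gone loop, mirroring waitfornbd_exit.
marker=$(mktemp)                   # stands in for the nbd0 line in /proc/partitions
( sleep 0.3; rm -f "$marker" ) &   # the "device" disappears asynchronously

gone=no
for ((i = 1; i <= 20; i++)); do
    # real script: grep -q -w nbd0 /proc/partitions
    if [ ! -e "$marker" ]; then
        gone=yes
        break                      # the "break" seen at nbd_common.sh@41
    fi
    sleep 0.1
done
wait
```

The `return 0` at @45 then reports success to the caller; a hardened version would also return non-zero when all 20 tries are exhausted.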
00:16:47.471 07:37:46 event.app_repeat -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:47.471 07:37:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@867 -- # return 0 00:16:48.037 07:37:47 event.app_repeat -- event/event.sh@39 -- # killprocess 58159 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@953 -- # '[' -z 58159 ']' 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@957 -- # kill -0 58159 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@958 -- # uname 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 58159 00:16:48.037 killing process with pid 58159 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@971 -- # echo 'killing process with pid 58159' 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@972 -- # kill 58159 00:16:48.037 07:37:47 event.app_repeat -- common/autotest_common.sh@977 -- # wait 58159 00:16:49.413 spdk_app_start is called in Round 0. 00:16:49.413 Shutdown signal received, stop current app iteration 00:16:49.413 Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 reinitialization... 00:16:49.413 spdk_app_start is called in Round 1. 00:16:49.413 Shutdown signal received, stop current app iteration 00:16:49.413 Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 reinitialization... 00:16:49.413 spdk_app_start is called in Round 2. 
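A small detail from the nbd_get_count trace earlier (nbd_common.sh@65): `grep -c` prints a count of 0 but exits non-zero when nothing matches, which is why a bare `true` appears right after it in the xtrace. A sketch of counting an empty device list without tripping errexit (the variable names are taken from the trace; the empty input is illustrative):

```shell
set -e                        # the test scripts run with errexit enabled
nbd_disks_name=""             # the RPC returned an empty JSON list -> no names

# grep -c prints the match count but exits 1 when that count is zero;
# "|| true" keeps the count while defusing the non-zero status under set -e.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)

[ "$count" -eq 0 ] && all_stopped=yes
```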
00:16:49.413 Shutdown signal received, stop current app iteration 00:16:49.413 Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 reinitialization... 00:16:49.413 spdk_app_start is called in Round 3. 00:16:49.413 Shutdown signal received, stop current app iteration 00:16:49.413 07:37:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:49.413 07:37:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:16:49.413 00:16:49.413 real 0m22.237s 00:16:49.413 user 0m46.904s 00:16:49.413 sys 0m3.735s 00:16:49.413 07:37:48 event.app_repeat -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:49.413 ************************************ 00:16:49.413 END TEST app_repeat 00:16:49.413 ************************************ 00:16:49.413 07:37:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 07:37:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:49.413 07:37:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:49.413 07:37:48 event -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:49.413 07:37:48 event -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:49.413 07:37:48 event -- common/autotest_common.sh@10 -- # set +x 00:16:49.413 ************************************ 00:16:49.413 START TEST cpu_locks 00:16:49.413 ************************************ 00:16:49.413 07:37:48 event.cpu_locks -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:49.413 * Looking for test storage... 
00:16:49.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:49.413 07:37:48 event.cpu_locks -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:16:49.413 07:37:48 event.cpu_locks -- common/autotest_common.sh@1626 -- # lcov --version 00:16:49.413 07:37:48 event.cpu_locks -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:16:49.673 07:37:49 event.cpu_locks -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.673 07:37:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:16:49.673 07:37:49 event.cpu_locks -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.673 07:37:49 event.cpu_locks -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:16:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.673 --rc genhtml_branch_coverage=1 00:16:49.673 --rc genhtml_function_coverage=1 00:16:49.673 --rc genhtml_legend=1 00:16:49.673 --rc geninfo_all_blocks=1 00:16:49.673 --rc geninfo_unexecuted_blocks=1 00:16:49.673 00:16:49.673 ' 00:16:49.673 07:37:49 event.cpu_locks -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:16:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.673 --rc genhtml_branch_coverage=1 00:16:49.673 --rc genhtml_function_coverage=1 00:16:49.673 --rc genhtml_legend=1 00:16:49.673 --rc geninfo_all_blocks=1 00:16:49.673 --rc geninfo_unexecuted_blocks=1 
00:16:49.673 00:16:49.673 ' 00:16:49.673 07:37:49 event.cpu_locks -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:16:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.673 --rc genhtml_branch_coverage=1 00:16:49.673 --rc genhtml_function_coverage=1 00:16:49.673 --rc genhtml_legend=1 00:16:49.673 --rc geninfo_all_blocks=1 00:16:49.674 --rc geninfo_unexecuted_blocks=1 00:16:49.674 00:16:49.674 ' 00:16:49.674 07:37:49 event.cpu_locks -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:16:49.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.674 --rc genhtml_branch_coverage=1 00:16:49.674 --rc genhtml_function_coverage=1 00:16:49.674 --rc genhtml_legend=1 00:16:49.674 --rc geninfo_all_blocks=1 00:16:49.674 --rc geninfo_unexecuted_blocks=1 00:16:49.674 00:16:49.674 ' 00:16:49.674 07:37:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:49.674 07:37:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:49.674 07:37:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:49.674 07:37:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:49.674 07:37:49 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:49.674 07:37:49 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:16:49.674 07:37:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:49.674 ************************************ 00:16:49.674 START TEST default_locks 00:16:49.674 ************************************ 00:16:49.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:49.674 07:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # default_locks 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58642 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58642 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # '[' -z 58642 ']' 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:49.674 07:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:49.674 [2024-10-07 07:37:49.224684] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:16:49.674 [2024-10-07 07:37:49.225187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ] 00:16:49.933 [2024-10-07 07:37:49.410267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.191 [2024-10-07 07:37:49.669625] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.564 07:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:51.564 07:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # return 0 00:16:51.564 07:37:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58642 00:16:51.564 07:37:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:51.565 07:37:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58642 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58642 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' -z 58642 ']' 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # kill -0 58642 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # uname 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 58642 00:16:51.823 killing process with pid 58642 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- 
common/autotest_common.sh@971 -- # echo 'killing process with pid 58642' 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # kill 58642 00:16:51.823 07:37:51 event.cpu_locks.default_locks -- common/autotest_common.sh@977 -- # wait 58642 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58642 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # local es=0 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # valid_exec_arg waitforlisten 58642 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # local arg=waitforlisten 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # type -t waitforlisten 00:16:55.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.111 ERROR: process (pid: 58642) is no longer running 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@656 -- # waitforlisten 58642 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # '[' -z 58642 ']' 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
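locks_exist in this test (cpu_locks.sh@22) asserts that the target still holds its CPU-core lock files by piping `lslocks -p <pid>` through `grep -q spdk_cpu_lock`. The locks themselves are ordinary advisory file locks; a minimal sketch of taking one and observing that a second taker is refused (flock(1) from util-linux assumed; the lock path here is a temp file, not SPDK's real /var/tmp lock):

```shell
# Hold an exclusive advisory lock on fd 9 -- the mechanism behind the
# per-core lock files that "lslocks -p <pid>" reports in the trace.
lockfile=$(mktemp)
exec 9>"$lockfile"
flock -x 9                    # this shell now owns the lock

# A second, non-blocking attempt from a child process is refused.
if flock -xn "$lockfile" -c true; then lock_state=free; else lock_state=held; fi

exec 9>&-                     # closing the fd releases the lock
if flock -xn "$lockfile" -c true; then lock_state_after=free; else lock_state_after=held; fi
rm -f "$lockfile"
```

Holding the lock on a dedicated file descriptor, as above, is what lets lslocks attribute the lock to the owning pid for as long as the process lives.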
00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:55.111 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 849: kill: (58642) - No such process 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # return 1 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@656 -- # es=1 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:55.111 00:16:55.111 real 0m5.224s 00:16:55.111 user 0m5.238s 00:16:55.111 sys 0m0.825s 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # xtrace_disable 00:16:55.111 07:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:55.111 ************************************ 00:16:55.111 END TEST default_locks 00:16:55.111 ************************************ 00:16:55.111 07:37:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:55.111 07:37:54 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:16:55.111 07:37:54 event.cpu_locks -- 
common/autotest_common.sh@1110 -- # xtrace_disable 00:16:55.111 07:37:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:55.111 ************************************ 00:16:55.111 START TEST default_locks_via_rpc 00:16:55.111 ************************************ 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # default_locks_via_rpc 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58729 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58729 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 58729 ']' 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:16:55.111 07:37:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.111 [2024-10-07 07:37:54.477230] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
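The `NOT waitforlisten 58642` sequence above exercises autotest_common.sh's NOT helper: run a command that is expected to fail, capture its status in `es`, normalize signal deaths (`(( es > 128 ))`), and invert the result (`(( !es == 0 ))`) so the surrounding errexit script treats the expected failure as success. A condensed sketch (the real helper also validates its argument via `valid_exec_arg`, which this sketch omits):

```shell
# Minimal status-inverting wrapper in the spirit of autotest_common.sh's NOT:
# succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?           # capture the exit status without tripping errexit
    (( es > 128 )) && es=1  # fold signal deaths into a plain failure, as the trace does
    # final status: true exactly when the wrapped command failed
    (( !es == 0 ))
}
```

This is why the log shows `es=1` followed by `(( es > 128 ))` and `(( !es == 0 ))` after the deliberately doomed second `waitforlisten`: the helper converts "process (pid: 58642) is no longer running" into a passing assertion.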
00:16:55.111 [2024-10-07 07:37:54.477377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58729 ] 00:16:55.111 [2024-10-07 07:37:54.643264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.369 [2024-10-07 07:37:54.914689] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.742 07:37:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58729 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:56.742 07:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58729 00:16:57.001 07:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58729 00:16:57.001 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' -z 58729 ']' 00:16:57.001 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # kill -0 58729 00:16:57.001 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # uname 00:16:57.001 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:16:57.001 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 58729 00:16:57.258 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:16:57.258 killing process with pid 58729 00:16:57.258 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:16:57.258 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # echo 'killing process with pid 58729' 00:16:57.258 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # kill 58729 00:16:57.258 07:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@977 -- # wait 58729 00:17:00.553 00:17:00.553 real 0m5.118s 00:17:00.553 user 0m5.245s 00:17:00.553 sys 0m0.824s 00:17:00.553 ************************************ 00:17:00.553 07:37:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # 
xtrace_disable 00:17:00.553 07:37:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.553 END TEST default_locks_via_rpc 00:17:00.553 ************************************ 00:17:00.553 07:37:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:17:00.553 07:37:59 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:17:00.553 07:37:59 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:00.553 07:37:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:00.553 ************************************ 00:17:00.553 START TEST non_locking_app_on_locked_coremask 00:17:00.553 ************************************ 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # non_locking_app_on_locked_coremask 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58818 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58818 /var/tmp/spdk.sock 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 58818 ']' 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:00.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
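killprocess (autotest_common.sh@953-977), which recurs throughout this log, probes the pid with `kill -0`, resolves the process name via `ps --no-headers -o comm=`, sends the signal, then waits. The `kill -0` step is worth calling out: it delivers no signal at all and only reports, via its exit status, whether the pid exists and can be signaled. A self-contained sketch:

```shell
# Liveness probe as used by killprocess: "kill -0" sends nothing,
# it only checks that the pid exists and is signalable.
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then alive_before=yes; else alive_before=no; fi

kill "$pid"                       # SIGTERM, as spdk_kill_instance SIGTERM does via RPC
wait "$pid" 2>/dev/null || true   # reap the child; wait reports the signal as status 143

if kill -0 "$pid" 2>/dev/null; then alive_after=yes; else alive_after=no; fi
```

Reaping with `wait` before the second probe matters: an unreaped zombie still answers `kill -0`, which is exactly why the script's killprocess is followed by a `wait` on the same pid.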
00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:00.553 07:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:00.553 [2024-10-07 07:37:59.704405] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:00.553 [2024-10-07 07:37:59.704663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58818 ] 00:17:00.553 [2024-10-07 07:37:59.918031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.811 [2024-10-07 07:38:00.212729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 0 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58840 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58840 /var/tmp/spdk2.sock 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 58840 ']' 00:17:01.747 07:38:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:01.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:01.747 07:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:02.005 [2024-10-07 07:38:01.310558] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:02.005 [2024-10-07 07:38:01.310760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58840 ] 00:17:02.005 [2024-10-07 07:38:01.510397] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:17:02.005 [2024-10-07 07:38:01.510477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.570 [2024-10-07 07:38:02.012690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.101 07:38:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:05.101 07:38:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 0 00:17:05.101 07:38:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58818 00:17:05.101 07:38:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:05.101 07:38:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58818 00:17:05.670 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58818 00:17:05.670 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' -z 58818 ']' 00:17:05.670 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # kill -0 58818 00:17:05.670 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # uname 00:17:05.670 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:05.670 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 58818 00:17:05.929 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:05.929 killing process with pid 58818 00:17:05.929 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:05.929 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@971 -- # echo 'killing process with pid 58818' 00:17:05.929 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # kill 58818 00:17:05.929 07:38:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@977 -- # wait 58818 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58840 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' -z 58840 ']' 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # kill -0 58840 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # uname 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 58840 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:12.492 killing process with pid 58840 00:17:12.492 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:12.493 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 58840' 00:17:12.493 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # kill 58840 00:17:12.493 07:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@977 -- # wait 58840 00:17:15.027 00:17:15.027 real 0m14.593s 00:17:15.027 user 0m15.185s 00:17:15.027 sys 0m1.809s 00:17:15.027 ************************************ 00:17:15.027 END TEST non_locking_app_on_locked_coremask 
00:17:15.027 ************************************ 00:17:15.027 07:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:15.027 07:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:15.027 07:38:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:17:15.027 07:38:14 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:17:15.027 07:38:14 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:15.027 07:38:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:15.027 ************************************ 00:17:15.027 START TEST locking_app_on_unlocked_coremask 00:17:15.027 ************************************ 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # locking_app_on_unlocked_coremask 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59021 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59021 /var/tmp/spdk.sock 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # '[' -z 59021 ']' 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:15.027 07:38:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:15.027 [2024-10-07 07:38:14.340259] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:15.027 [2024-10-07 07:38:14.340445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59021 ] 00:17:15.027 [2024-10-07 07:38:14.527199] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:15.027 [2024-10-07 07:38:14.527274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.285 [2024-10-07 07:38:14.821516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # return 0 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59037 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59037 /var/tmp/spdk2.sock 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # '[' -z 59037 ']' 00:17:16.219 07:38:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:16.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:16.219 07:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:16.477 [2024-10-07 07:38:15.815503] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:16.477 [2024-10-07 07:38:15.815643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59037 ] 00:17:16.477 [2024-10-07 07:38:15.989339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.042 [2024-10-07 07:38:16.433303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.571 07:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:19.571 07:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # return 0 00:17:19.571 07:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59037 00:17:19.571 07:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59037 00:17:19.571 07:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59021 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' -z 59021 ']' 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # kill -0 59021 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # uname 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59021 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:20.507 killing process with pid 59021 00:17:20.507 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:20.508 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 59021' 00:17:20.508 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # kill 59021 00:17:20.508 07:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@977 -- # wait 59021 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59037 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' -z 59037 ']' 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # kill -0 59037 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # uname 00:17:27.138 
07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59037 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:27.138 killing process with pid 59037 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 59037' 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # kill 59037 00:17:27.138 07:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@977 -- # wait 59037 00:17:29.043 00:17:29.043 real 0m14.224s 00:17:29.043 user 0m14.781s 00:17:29.043 sys 0m1.668s 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:29.043 ************************************ 00:17:29.043 END TEST locking_app_on_unlocked_coremask 00:17:29.043 ************************************ 00:17:29.043 07:38:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:17:29.043 07:38:28 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:17:29.043 07:38:28 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:29.043 07:38:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:29.043 ************************************ 00:17:29.043 START TEST locking_app_on_locked_coremask 00:17:29.043 
************************************ 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # locking_app_on_locked_coremask 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59213 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59213 /var/tmp/spdk.sock 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 59213 ']' 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:29.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:29.043 07:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:29.043 [2024-10-07 07:38:28.583468] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:17:29.043 [2024-10-07 07:38:28.583622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59213 ] 00:17:29.301 [2024-10-07 07:38:28.747336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.560 [2024-10-07 07:38:28.995152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 0 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59234 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59234 /var/tmp/spdk2.sock 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # local es=0 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # valid_exec_arg waitforlisten 59234 /var/tmp/spdk2.sock 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # local arg=waitforlisten 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # type -t waitforlisten 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # case "$(type -t 
"$arg")" in 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@656 -- # waitforlisten 59234 /var/tmp/spdk2.sock 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # '[' -z 59234 ']' 00:17:30.498 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:30.499 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:30.499 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:30.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:30.499 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:30.499 07:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:30.820 [2024-10-07 07:38:30.079443] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:30.820 [2024-10-07 07:38:30.079653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59234 ] 00:17:30.820 [2024-10-07 07:38:30.252495] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59213 has claimed it. 00:17:30.820 [2024-10-07 07:38:30.252579] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:17:31.386 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 849: kill: (59234) - No such process 00:17:31.386 ERROR: process (pid: 59234) is no longer running 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # return 1 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@656 -- # es=1 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59213 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59213 00:17:31.386 07:38:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59213 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' -z 59213 ']' 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # kill -0 59213 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # uname 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59213 00:17:31.952 
07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:31.952 killing process with pid 59213 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 59213' 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # kill 59213 00:17:31.952 07:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@977 -- # wait 59213 00:17:35.235 00:17:35.235 real 0m5.576s 00:17:35.235 user 0m5.798s 00:17:35.235 sys 0m0.910s 00:17:35.235 ************************************ 00:17:35.235 END TEST locking_app_on_locked_coremask 00:17:35.235 ************************************ 00:17:35.235 07:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:35.235 07:38:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:35.235 07:38:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:17:35.235 07:38:34 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:17:35.235 07:38:34 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:35.235 07:38:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:35.235 ************************************ 00:17:35.235 START TEST locking_overlapped_coremask 00:17:35.235 ************************************ 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # locking_overlapped_coremask 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59309 00:17:35.235 07:38:34 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59309 /var/tmp/spdk.sock 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # '[' -z 59309 ']' 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:35.235 07:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:35.235 [2024-10-07 07:38:34.232204] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:17:35.235 [2024-10-07 07:38:34.232353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:17:35.235 [2024-10-07 07:38:34.418850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.235 [2024-10-07 07:38:34.650745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.235 [2024-10-07 07:38:34.650870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.235 [2024-10-07 07:38:34.650914] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # return 0 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59333 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59333 /var/tmp/spdk2.sock 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # local es=0 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # valid_exec_arg waitforlisten 59333 /var/tmp/spdk2.sock 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # local arg=waitforlisten 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@645 -- # type -t waitforlisten 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@656 -- # waitforlisten 59333 /var/tmp/spdk2.sock 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # '[' -z 59333 ']' 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:36.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:36.171 07:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:36.171 [2024-10-07 07:38:35.718424] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:36.171 [2024-10-07 07:38:35.718573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59333 ] 00:17:36.429 [2024-10-07 07:38:35.897079] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59309 has claimed it. 00:17:36.429 [2024-10-07 07:38:35.900793] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:17:36.995 ERROR: process (pid: 59333) is no longer running 00:17:36.995 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 849: kill: (59333) - No such process 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # return 1 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@656 -- # es=1 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59309 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' -z 59309 ']' 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # kill -0 59309 00:17:36.995 07:38:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # uname 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59309 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:36.995 killing process with pid 59309 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # echo 'killing process with pid 59309' 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # kill 59309 00:17:36.995 07:38:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@977 -- # wait 59309 00:17:40.278 00:17:40.278 real 0m5.279s 00:17:40.278 user 0m14.001s 00:17:40.278 sys 0m0.689s 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:40.278 ************************************ 00:17:40.278 END TEST locking_overlapped_coremask 00:17:40.278 ************************************ 00:17:40.278 07:38:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:17:40.278 07:38:39 event.cpu_locks -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:17:40.278 07:38:39 event.cpu_locks -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:40.278 07:38:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:40.278 ************************************ 00:17:40.278 START TEST 
locking_overlapped_coremask_via_rpc 00:17:40.278 ************************************ 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # locking_overlapped_coremask_via_rpc 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59402 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59402 /var/tmp/spdk.sock 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 59402 ']' 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:40.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:17:40.278 07:38:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.278 [2024-10-07 07:38:39.574520] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:17:40.278 [2024-10-07 07:38:39.574687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59402 ] 00:17:40.278 [2024-10-07 07:38:39.751716] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:40.278 [2024-10-07 07:38:39.751807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.535 [2024-10-07 07:38:40.050417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.535 [2024-10-07 07:38:40.050572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.535 [2024-10-07 07:38:40.050611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59426 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59426 /var/tmp/spdk2.sock 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 59426 ']' 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:41.539 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:41.539 07:38:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.798 [2024-10-07 07:38:41.178485] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:41.798 [2024-10-07 07:38:41.178666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59426 ] 00:17:42.056 [2024-10-07 07:38:41.365474] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:17:42.056 [2024-10-07 07:38:41.369727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:42.622 [2024-10-07 07:38:41.927433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.622 [2024-10-07 07:38:41.934870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.622 [2024-10-07 07:38:41.934897] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # local es=0 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:44.521 07:38:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@656 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:44.521 07:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.521 [2024-10-07 07:38:43.990028] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59402 has claimed it. 00:17:44.521 request: 00:17:44.521 { 00:17:44.521 "method": "framework_enable_cpumask_locks", 00:17:44.521 "req_id": 1 00:17:44.521 } 00:17:44.521 Got JSON-RPC error response 00:17:44.521 response: 00:17:44.521 { 00:17:44.521 "code": -32603, 00:17:44.521 "message": "Failed to claim CPU core: 2" 00:17:44.521 } 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:17:44.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@656 -- # es=1 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59402 /var/tmp/spdk.sock 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 59402 ']' 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:44.521 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59426 /var/tmp/spdk2.sock 00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # '[' -z 59426 ']' 00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:44.779 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # return 0 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:45.037 ************************************ 00:17:45.037 END TEST locking_overlapped_coremask_via_rpc 00:17:45.037 ************************************ 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:45.037 00:17:45.037 real 0m5.117s 00:17:45.037 user 0m1.625s 00:17:45.037 sys 0m0.253s 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:45.037 07:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.304 07:38:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:17:45.304 07:38:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59402 ]] 00:17:45.304 07:38:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59402 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 59402 ']' 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 59402 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@958 -- # uname 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59402 00:17:45.304 killing process with pid 59402 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@971 -- # echo 'killing process with pid 59402' 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@972 -- # kill 59402 00:17:45.304 07:38:44 event.cpu_locks -- common/autotest_common.sh@977 -- # wait 59402 00:17:48.586 07:38:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59426 ]] 00:17:48.586 07:38:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59426 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 59426 ']' 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 59426 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@958 -- # uname 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59426 00:17:48.586 killing process with pid 59426 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@959 -- # process_name=reactor_2 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@963 -- # '[' reactor_2 = sudo ']' 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@971 -- # echo 'killing 
process with pid 59426' 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@972 -- # kill 59426 00:17:48.586 07:38:47 event.cpu_locks -- common/autotest_common.sh@977 -- # wait 59426 00:17:51.868 07:38:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:51.868 07:38:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:17:51.868 07:38:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59402 ]] 00:17:51.868 07:38:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59402 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 59402 ']' 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 59402 00:17:51.868 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 957: kill: (59402) - No such process 00:17:51.868 Process with pid 59402 is not found 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@980 -- # echo 'Process with pid 59402 is not found' 00:17:51.868 07:38:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59426 ]] 00:17:51.868 07:38:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59426 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' -z 59426 ']' 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@957 -- # kill -0 59426 00:17:51.868 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 957: kill: (59426) - No such process 00:17:51.868 Process with pid 59426 is not found 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@980 -- # echo 'Process with pid 59426 is not found' 00:17:51.868 07:38:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:51.868 00:17:51.868 real 1m1.971s 00:17:51.868 user 1m43.972s 00:17:51.868 sys 0m8.314s 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:51.868 07:38:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:51.868 
************************************ 00:17:51.868 END TEST cpu_locks 00:17:51.868 ************************************ 00:17:51.868 00:17:51.868 real 1m38.293s 00:17:51.868 user 2m54.497s 00:17:51.868 sys 0m13.380s 00:17:51.868 07:38:50 event -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:51.868 07:38:50 event -- common/autotest_common.sh@10 -- # set +x 00:17:51.868 ************************************ 00:17:51.868 END TEST event 00:17:51.868 ************************************ 00:17:51.868 07:38:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:51.868 07:38:50 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:17:51.868 07:38:50 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:51.868 07:38:50 -- common/autotest_common.sh@10 -- # set +x 00:17:51.868 ************************************ 00:17:51.868 START TEST thread 00:17:51.868 ************************************ 00:17:51.868 07:38:50 thread -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:51.868 * Looking for test storage... 
00:17:51.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:51.868 07:38:51 thread -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:17:51.868 07:38:51 thread -- common/autotest_common.sh@1626 -- # lcov --version 00:17:51.868 07:38:51 thread -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:17:51.868 07:38:51 thread -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:17:51.868 07:38:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.868 07:38:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.868 07:38:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.868 07:38:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.868 07:38:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.868 07:38:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.868 07:38:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.868 07:38:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.868 07:38:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.868 07:38:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.868 07:38:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.868 07:38:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:17:51.868 07:38:51 thread -- scripts/common.sh@345 -- # : 1 00:17:51.868 07:38:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.868 07:38:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.868 07:38:51 thread -- scripts/common.sh@365 -- # decimal 1 00:17:51.868 07:38:51 thread -- scripts/common.sh@353 -- # local d=1 00:17:51.868 07:38:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.868 07:38:51 thread -- scripts/common.sh@355 -- # echo 1 00:17:51.868 07:38:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.868 07:38:51 thread -- scripts/common.sh@366 -- # decimal 2 00:17:51.868 07:38:51 thread -- scripts/common.sh@353 -- # local d=2 00:17:51.868 07:38:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.868 07:38:51 thread -- scripts/common.sh@355 -- # echo 2 00:17:51.868 07:38:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.868 07:38:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.868 07:38:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.868 07:38:51 thread -- scripts/common.sh@368 -- # return 0 00:17:51.868 07:38:51 thread -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.868 07:38:51 thread -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:17:51.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.868 --rc genhtml_branch_coverage=1 00:17:51.868 --rc genhtml_function_coverage=1 00:17:51.868 --rc genhtml_legend=1 00:17:51.868 --rc geninfo_all_blocks=1 00:17:51.868 --rc geninfo_unexecuted_blocks=1 00:17:51.868 00:17:51.868 ' 00:17:51.869 07:38:51 thread -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:17:51.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.869 --rc genhtml_branch_coverage=1 00:17:51.869 --rc genhtml_function_coverage=1 00:17:51.869 --rc genhtml_legend=1 00:17:51.869 --rc geninfo_all_blocks=1 00:17:51.869 --rc geninfo_unexecuted_blocks=1 00:17:51.869 00:17:51.869 ' 00:17:51.869 07:38:51 thread -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:17:51.869 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.869 --rc genhtml_branch_coverage=1 00:17:51.869 --rc genhtml_function_coverage=1 00:17:51.869 --rc genhtml_legend=1 00:17:51.869 --rc geninfo_all_blocks=1 00:17:51.869 --rc geninfo_unexecuted_blocks=1 00:17:51.869 00:17:51.869 ' 00:17:51.869 07:38:51 thread -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:17:51.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.869 --rc genhtml_branch_coverage=1 00:17:51.869 --rc genhtml_function_coverage=1 00:17:51.869 --rc genhtml_legend=1 00:17:51.869 --rc geninfo_all_blocks=1 00:17:51.869 --rc geninfo_unexecuted_blocks=1 00:17:51.869 00:17:51.869 ' 00:17:51.869 07:38:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:51.869 07:38:51 thread -- common/autotest_common.sh@1104 -- # '[' 8 -le 1 ']' 00:17:51.869 07:38:51 thread -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:51.869 07:38:51 thread -- common/autotest_common.sh@10 -- # set +x 00:17:51.869 ************************************ 00:17:51.869 START TEST thread_poller_perf 00:17:51.869 ************************************ 00:17:51.869 07:38:51 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:51.869 [2024-10-07 07:38:51.192212] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:17:51.869 [2024-10-07 07:38:51.192358] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:17:51.869 [2024-10-07 07:38:51.362495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.127 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:17:52.127 [2024-10-07 07:38:51.637346] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.501 ====================================== 00:17:53.501 busy:2110214242 (cyc) 00:17:53.501 total_run_count: 348000 00:17:53.501 tsc_hz: 2100000000 (cyc) 00:17:53.501 ====================================== 00:17:53.501 poller_cost: 6063 (cyc), 2887 (nsec) 00:17:53.759 00:17:53.759 real 0m1.931s 00:17:53.759 user 0m1.686s 00:17:53.759 sys 0m0.132s 00:17:53.759 07:38:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:53.760 07:38:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 ************************************ 00:17:53.760 END TEST thread_poller_perf 00:17:53.760 ************************************ 00:17:53.760 07:38:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:53.760 07:38:53 thread -- common/autotest_common.sh@1104 -- # '[' 8 -le 1 ']' 00:17:53.760 07:38:53 thread -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:53.760 07:38:53 thread -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 ************************************ 00:17:53.760 START TEST thread_poller_perf 00:17:53.760 ************************************ 00:17:53.760 07:38:53 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:17:53.760 [2024-10-07 07:38:53.193067] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:53.760 [2024-10-07 07:38:53.193255] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59691 ] 00:17:54.018 [2024-10-07 07:38:53.379336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.277 [2024-10-07 07:38:53.601899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.277 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:17:55.653 ====================================== 00:17:55.653 busy:2103440942 (cyc) 00:17:55.653 total_run_count: 4846000 00:17:55.653 tsc_hz: 2100000000 (cyc) 00:17:55.653 ====================================== 00:17:55.653 poller_cost: 434 (cyc), 206 (nsec) 00:17:55.653 00:17:55.653 real 0m1.879s 00:17:55.653 user 0m1.645s 00:17:55.653 sys 0m0.125s 00:17:55.653 07:38:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:55.653 07:38:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:55.653 ************************************ 00:17:55.653 END TEST thread_poller_perf 00:17:55.653 ************************************ 00:17:55.653 07:38:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:55.653 ************************************ 00:17:55.653 END TEST thread 00:17:55.653 ************************************ 00:17:55.653 00:17:55.653 real 0m4.178s 00:17:55.653 user 0m3.524s 00:17:55.653 sys 0m0.438s 00:17:55.653 07:38:55 thread -- common/autotest_common.sh@1129 -- # xtrace_disable 00:17:55.653 07:38:55 thread -- common/autotest_common.sh@10 -- # set +x 00:17:55.653 07:38:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:17:55.653 07:38:55 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:55.653 07:38:55 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:17:55.653 07:38:55 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:17:55.653 07:38:55 -- common/autotest_common.sh@10 -- # set +x 00:17:55.653 ************************************ 00:17:55.653 START TEST app_cmdline 00:17:55.653 ************************************ 00:17:55.653 07:38:55 app_cmdline -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:55.913 * Looking for test storage... 00:17:55.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1626 -- # lcov --version 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:17:55.913 07:38:55 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:17:55.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.913 07:38:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:17:55.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.913 --rc genhtml_branch_coverage=1 00:17:55.913 --rc genhtml_function_coverage=1 00:17:55.913 --rc genhtml_legend=1 00:17:55.913 --rc geninfo_all_blocks=1 00:17:55.913 --rc geninfo_unexecuted_blocks=1 00:17:55.913 00:17:55.913 ' 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:17:55.913 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:55.913 --rc genhtml_branch_coverage=1 00:17:55.913 --rc genhtml_function_coverage=1 00:17:55.913 --rc genhtml_legend=1 00:17:55.913 --rc geninfo_all_blocks=1 00:17:55.913 --rc geninfo_unexecuted_blocks=1 00:17:55.913 00:17:55.913 ' 00:17:55.913 07:38:55 app_cmdline -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:17:55.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.913 --rc genhtml_branch_coverage=1 00:17:55.913 --rc genhtml_function_coverage=1 00:17:55.914 --rc genhtml_legend=1 00:17:55.914 --rc geninfo_all_blocks=1 00:17:55.914 --rc geninfo_unexecuted_blocks=1 00:17:55.914 00:17:55.914 ' 00:17:55.914 07:38:55 app_cmdline -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:17:55.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.914 --rc genhtml_branch_coverage=1 00:17:55.914 --rc genhtml_function_coverage=1 00:17:55.914 --rc genhtml_legend=1 00:17:55.914 --rc geninfo_all_blocks=1 00:17:55.914 --rc geninfo_unexecuted_blocks=1 00:17:55.914 00:17:55.914 ' 00:17:55.914 07:38:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:55.914 07:38:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59786 00:17:55.914 07:38:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59786 00:17:55.914 07:38:55 app_cmdline -- common/autotest_common.sh@834 -- # '[' -z 59786 ']' 00:17:55.914 07:38:55 app_cmdline -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.914 07:38:55 app_cmdline -- common/autotest_common.sh@839 -- # local max_retries=100 00:17:55.914 07:38:55 app_cmdline -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:55.914 07:38:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:55.914 07:38:55 app_cmdline -- common/autotest_common.sh@843 -- # xtrace_disable 00:17:55.914 07:38:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:56.173 [2024-10-07 07:38:55.504432] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:17:56.173 [2024-10-07 07:38:55.504757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59786 ] 00:17:56.173 [2024-10-07 07:38:55.673085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.431 [2024-10-07 07:38:55.902270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.365 07:38:56 app_cmdline -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:17:57.365 07:38:56 app_cmdline -- common/autotest_common.sh@867 -- # return 0 00:17:57.365 07:38:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:57.623 { 00:17:57.623 "version": "SPDK v25.01-pre git sha1 70750b651", 00:17:57.623 "fields": { 00:17:57.623 "major": 25, 00:17:57.623 "minor": 1, 00:17:57.623 "patch": 0, 00:17:57.623 "suffix": "-pre", 00:17:57.623 "commit": "70750b651" 00:17:57.623 } 00:17:57.623 } 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:57.623 
07:38:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:57.623 07:38:57 app_cmdline -- common/autotest_common.sh@564 -- # xtrace_disable 00:17:57.623 07:38:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:57.623 07:38:57 app_cmdline -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:57.623 07:38:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:57.623 07:38:57 app_cmdline -- common/autotest_common.sh@653 -- # local es=0 00:17:57.623 07:38:57 app_cmdline -- common/autotest_common.sh@655 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@641 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@645 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@647 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@647 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@647 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:57.624 07:38:57 app_cmdline -- common/autotest_common.sh@656 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:57.881 request: 00:17:57.881 { 00:17:57.881 "method": "env_dpdk_get_mem_stats", 00:17:57.881 "req_id": 1 00:17:57.881 } 00:17:57.881 Got JSON-RPC error response 00:17:57.881 response: 00:17:57.881 { 00:17:57.881 "code": -32601, 00:17:57.881 "message": "Method not found" 00:17:57.881 } 00:17:57.881 07:38:57 app_cmdline -- common/autotest_common.sh@656 -- # es=1 00:17:57.881 07:38:57 app_cmdline -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:17:57.881 07:38:57 app_cmdline -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:17:57.881 07:38:57 app_cmdline -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:17:57.881 07:38:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59786 00:17:57.881 07:38:57 app_cmdline -- common/autotest_common.sh@953 -- # '[' -z 59786 ']' 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@957 -- # kill -0 59786 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@958 -- # uname 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59786 00:17:57.882 killing process with pid 59786 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@971 -- # echo 'killing process with pid 59786' 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@972 -- # kill 59786 00:17:57.882 07:38:57 app_cmdline -- common/autotest_common.sh@977 -- # wait 59786 00:18:01.182 ************************************ 00:18:01.182 END TEST app_cmdline 00:18:01.182 ************************************ 00:18:01.182 00:18:01.182 real 0m5.029s 00:18:01.182 user 0m5.412s 00:18:01.182 sys 0m0.687s 00:18:01.182 07:39:00 app_cmdline -- 
common/autotest_common.sh@1129 -- # xtrace_disable 00:18:01.182 07:39:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:01.182 07:39:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:01.182 07:39:00 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:18:01.182 07:39:00 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:01.182 07:39:00 -- common/autotest_common.sh@10 -- # set +x 00:18:01.182 ************************************ 00:18:01.182 START TEST version 00:18:01.182 ************************************ 00:18:01.182 07:39:00 version -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:01.182 * Looking for test storage... 00:18:01.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:01.182 07:39:00 version -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:18:01.182 07:39:00 version -- common/autotest_common.sh@1626 -- # lcov --version 00:18:01.182 07:39:00 version -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:18:01.182 07:39:00 version -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:18:01.182 07:39:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.182 07:39:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.182 07:39:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.182 07:39:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.182 07:39:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.182 07:39:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.182 07:39:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.182 07:39:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.182 07:39:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.182 07:39:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.182 07:39:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.182 
07:39:00 version -- scripts/common.sh@344 -- # case "$op" in 00:18:01.182 07:39:00 version -- scripts/common.sh@345 -- # : 1 00:18:01.182 07:39:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.182 07:39:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.182 07:39:00 version -- scripts/common.sh@365 -- # decimal 1 00:18:01.182 07:39:00 version -- scripts/common.sh@353 -- # local d=1 00:18:01.183 07:39:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.183 07:39:00 version -- scripts/common.sh@355 -- # echo 1 00:18:01.183 07:39:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.183 07:39:00 version -- scripts/common.sh@366 -- # decimal 2 00:18:01.183 07:39:00 version -- scripts/common.sh@353 -- # local d=2 00:18:01.183 07:39:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.183 07:39:00 version -- scripts/common.sh@355 -- # echo 2 00:18:01.183 07:39:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.183 07:39:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.183 07:39:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.183 07:39:00 version -- scripts/common.sh@368 -- # return 0 00:18:01.183 07:39:00 version -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.183 07:39:00 version -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 07:39:00 version -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 
00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 07:39:00 version -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 07:39:00 version -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 07:39:00 version -- app/version.sh@17 -- # get_header_version major 00:18:01.183 07:39:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # cut -f2 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # tr -d '"' 00:18:01.183 07:39:00 version -- app/version.sh@17 -- # major=25 00:18:01.183 07:39:00 version -- app/version.sh@18 -- # get_header_version minor 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # cut -f2 00:18:01.183 07:39:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # tr -d '"' 00:18:01.183 07:39:00 version -- app/version.sh@18 -- # minor=1 00:18:01.183 07:39:00 version -- app/version.sh@19 -- # get_header_version patch 00:18:01.183 07:39:00 version -- app/version.sh@13 -- # grep -E 
'^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # cut -f2 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # tr -d '"' 00:18:01.183 07:39:00 version -- app/version.sh@19 -- # patch=0 00:18:01.183 07:39:00 version -- app/version.sh@20 -- # get_header_version suffix 00:18:01.183 07:39:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # cut -f2 00:18:01.183 07:39:00 version -- app/version.sh@14 -- # tr -d '"' 00:18:01.183 07:39:00 version -- app/version.sh@20 -- # suffix=-pre 00:18:01.183 07:39:00 version -- app/version.sh@22 -- # version=25.1 00:18:01.183 07:39:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:01.183 07:39:00 version -- app/version.sh@28 -- # version=25.1rc0 00:18:01.183 07:39:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:01.183 07:39:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:01.183 07:39:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:18:01.183 07:39:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:18:01.183 00:18:01.183 real 0m0.339s 00:18:01.183 user 0m0.212s 00:18:01.183 sys 0m0.172s 00:18:01.183 ************************************ 00:18:01.183 END TEST version 00:18:01.183 ************************************ 00:18:01.183 07:39:00 version -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:01.183 07:39:00 version -- common/autotest_common.sh@10 -- # set +x 00:18:01.183 07:39:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:18:01.183 07:39:00 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 
00:18:01.183 07:39:00 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:01.183 07:39:00 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:18:01.183 07:39:00 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:01.183 07:39:00 -- common/autotest_common.sh@10 -- # set +x 00:18:01.183 ************************************ 00:18:01.183 START TEST bdev_raid 00:18:01.183 ************************************ 00:18:01.183 07:39:00 bdev_raid -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:01.183 * Looking for test storage... 00:18:01.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1626 -- # lcov --version 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:01.441 07:39:00 
bdev_raid -- scripts/common.sh@345 -- # : 1 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.441 07:39:00 bdev_raid -- scripts/common.sh@368 -- # return 0 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:18:01.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.441 --rc genhtml_branch_coverage=1 00:18:01.441 --rc genhtml_function_coverage=1 00:18:01.441 --rc genhtml_legend=1 00:18:01.441 --rc geninfo_all_blocks=1 00:18:01.441 --rc geninfo_unexecuted_blocks=1 00:18:01.441 00:18:01.441 ' 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:18:01.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.441 --rc genhtml_branch_coverage=1 00:18:01.441 --rc genhtml_function_coverage=1 
00:18:01.441 --rc genhtml_legend=1 00:18:01.441 --rc geninfo_all_blocks=1 00:18:01.441 --rc geninfo_unexecuted_blocks=1 00:18:01.441 00:18:01.441 ' 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:18:01.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.441 --rc genhtml_branch_coverage=1 00:18:01.441 --rc genhtml_function_coverage=1 00:18:01.441 --rc genhtml_legend=1 00:18:01.441 --rc geninfo_all_blocks=1 00:18:01.441 --rc geninfo_unexecuted_blocks=1 00:18:01.441 00:18:01.441 ' 00:18:01.441 07:39:00 bdev_raid -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:18:01.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.441 --rc genhtml_branch_coverage=1 00:18:01.441 --rc genhtml_function_coverage=1 00:18:01.441 --rc genhtml_legend=1 00:18:01.441 --rc geninfo_all_blocks=1 00:18:01.441 --rc geninfo_unexecuted_blocks=1 00:18:01.441 00:18:01.441 ' 00:18:01.441 07:39:00 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:01.442 07:39:00 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:18:01.442 07:39:00 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:18:01.442 07:39:00 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:18:01.442 07:39:00 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:18:01.442 07:39:00 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:18:01.442 07:39:00 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:18:01.442 07:39:00 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:18:01.442 07:39:00 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:01.442 07:39:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.442 ************************************ 00:18:01.442 START TEST raid1_resize_data_offset_test 00:18:01.442 ************************************ 00:18:01.442 
07:39:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # raid_resize_data_offset_test 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59991 00:18:01.442 Process raid pid: 59991 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59991' 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59991 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@834 -- # '[' -z 59991 ']' 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:01.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:01.442 07:39:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.700 [2024-10-07 07:39:01.025344] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:18:01.700 [2024-10-07 07:39:01.025526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.700 [2024-10-07 07:39:01.218109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.276 [2024-10-07 07:39:01.532632] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.276 [2024-10-07 07:39:01.766228] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.276 [2024-10-07 07:39:01.766279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.534 07:39:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:02.534 07:39:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@867 -- # return 0 00:18:02.534 07:39:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:18:02.534 07:39:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:02.534 07:39:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.534 malloc0 00:18:02.534 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:02.534 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:18:02.534 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:02.534 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.792 malloc1 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:02.792 07:39:02 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.792 null0 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.792 [2024-10-07 07:39:02.190101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:18:02.792 [2024-10-07 07:39:02.192307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:02.792 [2024-10-07 07:39:02.192362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:18:02.792 [2024-10-07 07:39:02.192529] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:02.792 [2024-10-07 07:39:02.192543] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:18:02.792 [2024-10-07 07:39:02.192894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:02.792 [2024-10-07 07:39:02.193064] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:02.792 [2024-10-07 07:39:02.193080] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:02.792 [2024-10-07 07:39:02.193258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.792 [2024-10-07 07:39:02.234085] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:02.792 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 malloc2 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 [2024-10-07 07:39:02.837074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:03.359 [2024-10-07 07:39:02.855682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:03.359 [2024-10-07 07:39:02.858055] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59991 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@953 -- # '[' -z 59991 ']' 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # kill -0 59991 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # uname 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux 
']' 00:18:03.359 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 59991 00:18:03.618 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:03.618 killing process with pid 59991 00:18:03.618 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:03.618 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 59991' 00:18:03.618 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # kill 59991 00:18:03.618 [2024-10-07 07:39:02.949467] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.618 07:39:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@977 -- # wait 59991 00:18:03.618 [2024-10-07 07:39:02.950160] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:18:03.618 [2024-10-07 07:39:02.950229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.618 [2024-10-07 07:39:02.950249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:18:03.618 [2024-10-07 07:39:02.981183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.618 [2024-10-07 07:39:02.981545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.618 [2024-10-07 07:39:02.981571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:05.521 [2024-10-07 07:39:04.905924] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.897 07:39:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:18:06.897 00:18:06.897 real 0m5.378s 00:18:06.897 user 0m5.347s 00:18:06.897 sys 0m0.643s 00:18:06.897 07:39:06 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:06.897 07:39:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.897 ************************************ 00:18:06.897 END TEST raid1_resize_data_offset_test 00:18:06.897 ************************************ 00:18:06.897 07:39:06 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:18:06.897 07:39:06 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:18:06.897 07:39:06 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:06.897 07:39:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.897 ************************************ 00:18:06.897 START TEST raid0_resize_superblock_test 00:18:06.897 ************************************ 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # raid_resize_superblock_test 0 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60086 00:18:06.897 Process raid pid: 60086 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60086' 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60086 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 60086 ']' 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.897 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:06.897 07:39:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.897 [2024-10-07 07:39:06.399817] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:18:06.897 [2024-10-07 07:39:06.399969] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.155 [2024-10-07 07:39:06.560691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.413 [2024-10-07 07:39:06.793374] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.670 [2024-10-07 07:39:07.021652] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.670 [2024-10-07 07:39:07.021725] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.929 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:07.929 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:18:07.929 07:39:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:18:07.929 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:07.929 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.498 
malloc0 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.498 [2024-10-07 07:39:07.991729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:08.498 [2024-10-07 07:39:07.991807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.498 [2024-10-07 07:39:07.991836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:08.498 [2024-10-07 07:39:07.991852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.498 [2024-10-07 07:39:07.994441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.498 [2024-10-07 07:39:07.994492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:08.498 pt0 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.498 07:39:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 a74fa38d-354d-48e7-a70f-b00799bdc571 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:18:08.772 07:39:08 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 aae52aeb-6e63-4306-8904-b58013c5bc0d 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 e945bbd1-585f-408e-b796-b0c899f8b8b9 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 [2024-10-07 07:39:08.133352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev aae52aeb-6e63-4306-8904-b58013c5bc0d is claimed 00:18:08.772 [2024-10-07 07:39:08.133507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e945bbd1-585f-408e-b796-b0c899f8b8b9 is claimed 00:18:08.772 [2024-10-07 07:39:08.133675] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:08.772 [2024-10-07 07:39:08.133699] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:18:08.772 [2024-10-07 07:39:08.134084] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:08.772 [2024-10-07 07:39:08.134326] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:08.772 [2024-10-07 07:39:08.134341] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:08.772 [2024-10-07 07:39:08.134564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 [2024-10-07 07:39:08.233825] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 [2024-10-07 07:39:08.273884] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:08.772 [2024-10-07 07:39:08.273952] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'aae52aeb-6e63-4306-8904-b58013c5bc0d' was resized: old size 131072, new size 204800 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.772 [2024-10-07 07:39:08.281613] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:08.772 [2024-10-07 07:39:08.281653] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e945bbd1-585f-408e-b796-b0c899f8b8b9' was resized: old size 131072, new size 204800 00:18:08.772 [2024-10-07 07:39:08.281700] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.772 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.773 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:18:08.773 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:08.773 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:18:08.773 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:08.773 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:18:08.773 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:08.773 07:39:08 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.035 [2024-10-07 07:39:08.373904] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.035 [2024-10-07 07:39:08.409520] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:18:09.035 [2024-10-07 07:39:08.409657] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:18:09.035 [2024-10-07 07:39:08.409686] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.035 [2024-10-07 07:39:08.409736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:18:09.035 [2024-10-07 07:39:08.409932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.035 [2024-10-07 07:39:08.409998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.035 [2024-10-07 07:39:08.410022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.035 [2024-10-07 07:39:08.417384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:09.035 [2024-10-07 07:39:08.417472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.035 [2024-10-07 07:39:08.417509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:09.035 [2024-10-07 07:39:08.417534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.035 [2024-10-07 07:39:08.421655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.035 [2024-10-07 07:39:08.421730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:18:09.035 pt0 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.035 [2024-10-07 07:39:08.424517] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev aae52aeb-6e63-4306-8904-b58013c5bc0d 00:18:09.035 [2024-10-07 07:39:08.424634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev aae52aeb-6e63-4306-8904-b58013c5bc0d is claimed 00:18:09.035 [2024-10-07 07:39:08.424865] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e945bbd1-585f-408e-b796-b0c899f8b8b9 00:18:09.035 [2024-10-07 07:39:08.424903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e945bbd1-585f-408e-b796-b0c899f8b8b9 is claimed 00:18:09.035 [2024-10-07 07:39:08.425089] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e945bbd1-585f-408e-b796-b0c899f8b8b9 (2) smaller than existing raid bdev Raid (3) 00:18:09.035 [2024-10-07 07:39:08.425125] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev aae52aeb-6e63-4306-8904-b58013c5bc0d: File exists 00:18:09.035 [2024-10-07 07:39:08.425188] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:09.035 [2024-10-07 07:39:08.425212] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:18:09.035 [2024-10-07 07:39:08.425620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:09.035 [2024-10-07 07:39:08.425882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:09.035 [2024-10-07 
07:39:08.425901] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:18:09.035 [2024-10-07 07:39:08.426280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:18:09.035 [2024-10-07 07:39:08.438498] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60086 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 60086 ']' 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # kill -0 60086 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@958 -- # uname 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60086 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:09.035 killing process with pid 60086 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 60086' 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # kill 60086 00:18:09.035 07:39:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@977 -- # wait 60086 00:18:09.035 [2024-10-07 07:39:08.517176] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.035 [2024-10-07 07:39:08.517337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.035 [2024-10-07 07:39:08.517412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.035 [2024-10-07 07:39:08.517426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:18:10.938 [2024-10-07 07:39:10.209122] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.317 07:39:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:18:12.317 00:18:12.317 real 0m5.391s 00:18:12.317 user 0m5.624s 00:18:12.317 sys 0m0.656s 00:18:12.317 ************************************ 00:18:12.317 END TEST raid0_resize_superblock_test 00:18:12.317 ************************************ 00:18:12.317 07:39:11 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1129 -- # xtrace_disable 00:18:12.317 07:39:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.317 07:39:11 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:18:12.317 07:39:11 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:18:12.317 07:39:11 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:12.317 07:39:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.317 ************************************ 00:18:12.317 START TEST raid1_resize_superblock_test 00:18:12.317 ************************************ 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # raid_resize_superblock_test 1 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60191 00:18:12.317 Process raid pid: 60191 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60191' 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60191 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 60191 ']' 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:12.317 07:39:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.576 [2024-10-07 07:39:11.881064] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:18:12.576 [2024-10-07 07:39:11.881240] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.576 [2024-10-07 07:39:12.064317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.835 [2024-10-07 07:39:12.343261] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.095 [2024-10-07 07:39:12.606959] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.095 [2024-10-07 07:39:12.607018] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.354 07:39:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:13.354 07:39:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:18:13.354 07:39:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:18:13.354 07:39:12 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:18:13.354 07:39:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.290 malloc0 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.290 [2024-10-07 07:39:13.664568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:14.290 [2024-10-07 07:39:13.664678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.290 [2024-10-07 07:39:13.664723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:14.290 [2024-10-07 07:39:13.664743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.290 [2024-10-07 07:39:13.668047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.290 [2024-10-07 07:39:13.668097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:14.290 pt0 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.290 0fc79cf7-c321-4841-a4b1-c0a054d9b733 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # 
[[ 0 == 0 ]] 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.290 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.550 9833c8a4-b1c0-46f0-b982-8797ef8cd08d 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.550 56796268-6271-401b-9409-234f384a190a 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.550 [2024-10-07 07:39:13.868986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9833c8a4-b1c0-46f0-b982-8797ef8cd08d is claimed 00:18:14.550 [2024-10-07 07:39:13.869185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 56796268-6271-401b-9409-234f384a190a is claimed 00:18:14.550 [2024-10-07 07:39:13.869442] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:14.550 [2024-10-07 
07:39:13.869479] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:18:14.550 [2024-10-07 07:39:13.870016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:14.550 [2024-10-07 07:39:13.870348] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:14.550 [2024-10-07 07:39:13.870371] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:14.550 [2024-10-07 07:39:13.870659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.550 07:39:13 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:18:14.550 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.550 [2024-10-07 07:39:13.973189] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.551 07:39:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.551 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:14.551 07:39:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.551 [2024-10-07 07:39:14.017231] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:14.551 [2024-10-07 07:39:14.017289] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9833c8a4-b1c0-46f0-b982-8797ef8cd08d' was resized: old size 131072, new size 204800 
00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.551 [2024-10-07 07:39:14.025101] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:14.551 [2024-10-07 07:39:14.025137] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '56796268-6271-401b-9409-234f384a190a' was resized: old size 131072, new size 204800 00:18:14.551 [2024-10-07 07:39:14.025175] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.551 07:39:14 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:14.551 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.811 [2024-10-07 07:39:14.113287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:14.811 [2024-10-07 07:39:14.144974] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:18:14.811 [2024-10-07 07:39:14.145100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:18:14.811 [2024-10-07 07:39:14.145157] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:18:14.811 [2024-10-07 07:39:14.145381] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.811 [2024-10-07 07:39:14.145641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.811 [2024-10-07 07:39:14.145741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.811 [2024-10-07 07:39:14.145769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.811 [2024-10-07 07:39:14.152888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:14.811 [2024-10-07 07:39:14.152995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.811 [2024-10-07 07:39:14.153029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:14.811 [2024-10-07 07:39:14.153049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.811 [2024-10-07 07:39:14.156504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.811 
[2024-10-07 07:39:14.156552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:14.811 pt0 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.811 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.811 [2024-10-07 07:39:14.158882] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9833c8a4-b1c0-46f0-b982-8797ef8cd08d 00:18:14.812 [2024-10-07 07:39:14.158974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9833c8a4-b1c0-46f0-b982-8797ef8cd08d is claimed 00:18:14.812 [2024-10-07 07:39:14.159110] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 56796268-6271-401b-9409-234f384a190a 00:18:14.812 [2024-10-07 07:39:14.159136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 56796268-6271-401b-9409-234f384a190a is claimed 00:18:14.812 [2024-10-07 07:39:14.159300] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 56796268-6271-401b-9409-234f384a190a (2) smaller than existing raid bdev Raid (3) 00:18:14.812 [2024-10-07 07:39:14.159328] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9833c8a4-b1c0-46f0-b982-8797ef8cd08d: File exists 00:18:14.812 [2024-10-07 07:39:14.159380] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:14.812 [2024-10-07 07:39:14.159410] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:14.812 [2024-10-07 07:39:14.159760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:14.812 [2024-10-07 07:39:14.159947] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:14.812 [2024-10-07 07:39:14.159958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:18:14.812 [2024-10-07 07:39:14.160123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.812 [2024-10-07 07:39:14.173239] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60191 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 60191 ']' 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # kill -0 60191 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # uname 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60191 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:14.812 killing process with pid 60191 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 60191' 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # kill 60191 00:18:14.812 07:39:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@977 -- # wait 60191 00:18:14.812 [2024-10-07 07:39:14.249621] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.812 [2024-10-07 07:39:14.249782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.812 [2024-10-07 07:39:14.249861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.812 [2024-10-07 07:39:14.249876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:18:16.717 [2024-10-07 07:39:15.942361] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.093 07:39:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:18:18.093 00:18:18.093 real 0m5.524s 00:18:18.093 user 0m5.610s 00:18:18.093 sys 0m0.863s 00:18:18.093 07:39:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:18.093 
07:39:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.093 ************************************ 00:18:18.093 END TEST raid1_resize_superblock_test 00:18:18.093 ************************************ 00:18:18.093 07:39:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:18:18.093 07:39:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:18:18.093 07:39:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:18:18.093 07:39:17 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:18:18.093 07:39:17 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:18:18.093 07:39:17 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:18:18.093 07:39:17 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:18:18.093 07:39:17 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:18.093 07:39:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.093 ************************************ 00:18:18.093 START TEST raid_function_test_raid0 00:18:18.093 ************************************ 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # raid_function_test raid0 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:18:18.093 Process raid pid: 60299 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60299 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60299' 
00:18:18.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60299 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # '[' -z 60299 ']' 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:18.093 07:39:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:18.093 [2024-10-07 07:39:17.493348] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:18:18.093 [2024-10-07 07:39:17.493519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.351 [2024-10-07 07:39:17.677044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.609 [2024-10-07 07:39:17.934852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.609 [2024-10-07 07:39:18.166497] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.609 [2024-10-07 07:39:18.166546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # return 0 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:18.866 Base_1 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:18.866 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:19.125 Base_2 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:19.125 [2024-10-07 07:39:18.471747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:19.125 [2024-10-07 07:39:18.474006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:19.125 [2024-10-07 07:39:18.474094] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:19.125 [2024-10-07 07:39:18.474110] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:19.125 [2024-10-07 07:39:18.474431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:19.125 [2024-10-07 07:39:18.474591] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:19.125 [2024-10-07 07:39:18.474602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:18:19.125 [2024-10-07 07:39:18.474808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.125 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:18:19.382 [2024-10-07 07:39:18.711820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:19.382 /dev/nbd0 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local i 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:18:19.382 
07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # break 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.382 1+0 records in 00:18:19.382 1+0 records out 00:18:19.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251224 s, 16.3 MB/s 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # size=4096 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # return 0 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.382 07:39:18 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:19.640 { 00:18:19.640 "nbd_device": "/dev/nbd0", 00:18:19.640 "bdev_name": "raid" 00:18:19.640 } 00:18:19.640 ]' 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:19.640 { 00:18:19.640 "nbd_device": "/dev/nbd0", 00:18:19.640 "bdev_name": "raid" 00:18:19.640 } 00:18:19.640 ]' 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:18:19.640 4096+0 records in 00:18:19.640 4096+0 records out 00:18:19.640 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0372537 s, 56.3 MB/s 00:18:19.640 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:18:20.205 4096+0 records in 00:18:20.205 4096+0 records out 00:18:20.205 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.287193 s, 7.3 MB/s 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:18:20.205 128+0 records in 00:18:20.205 128+0 records out 00:18:20.205 65536 bytes (66 kB, 64 KiB) copied, 0.00141008 s, 46.5 MB/s 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:18:20.205 2035+0 records in 00:18:20.205 2035+0 records out 00:18:20.205 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0173676 s, 60.0 MB/s 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:20.205 07:39:19 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:18:20.205 456+0 records in 00:18:20.205 456+0 records out 00:18:20.205 233472 bytes (233 kB, 228 KiB) copied, 0.00416145 s, 56.1 MB/s 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:20.205 07:39:19 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.205 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:20.463 [2024-10-07 07:39:19.892608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.463 07:39:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60299 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' -z 60299 ']' 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # kill -0 60299 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # uname 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:20.722 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60299 00:18:20.981 killing process with pid 60299 00:18:20.981 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:20.981 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:20.981 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # echo 'killing process with pid 60299' 00:18:20.981 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # kill 60299 
00:18:20.981 [2024-10-07 07:39:20.297943] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.981 [2024-10-07 07:39:20.298056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.981 07:39:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@977 -- # wait 60299 00:18:20.981 [2024-10-07 07:39:20.298104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.981 [2024-10-07 07:39:20.298118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:18:20.981 [2024-10-07 07:39:20.522529] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.357 ************************************ 00:18:22.357 END TEST raid_function_test_raid0 00:18:22.357 ************************************ 00:18:22.357 07:39:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:18:22.357 00:18:22.357 real 0m4.506s 00:18:22.357 user 0m5.168s 00:18:22.357 sys 0m1.210s 00:18:22.357 07:39:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:22.357 07:39:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:22.616 07:39:21 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:18:22.616 07:39:21 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:18:22.616 07:39:21 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:22.616 07:39:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.616 ************************************ 00:18:22.616 START TEST raid_function_test_concat 00:18:22.616 ************************************ 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # raid_function_test concat 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60439 00:18:22.616 Process raid pid: 60439 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60439' 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60439 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # '[' -z 60439 ']' 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:22.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:22.616 07:39:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:18:22.617 [2024-10-07 07:39:22.035586] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:18:22.617 [2024-10-07 07:39:22.035719] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.875 [2024-10-07 07:39:22.203021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.136 [2024-10-07 07:39:22.436748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.136 [2024-10-07 07:39:22.661937] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.136 [2024-10-07 07:39:22.661993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # return 0 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:18:23.742 Base_1 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:18:23.742 Base_2 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:18:23.742 [2024-10-07 07:39:23.199858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:23.742 [2024-10-07 07:39:23.201925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:23.742 [2024-10-07 07:39:23.201999] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:23.742 [2024-10-07 07:39:23.202013] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:23.742 [2024-10-07 07:39:23.202286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:23.742 [2024-10-07 07:39:23.202438] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:23.742 [2024-10-07 07:39:23.202448] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:18:23.742 [2024-10-07 07:39:23.202621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:23.742 07:39:23 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:18:23.742 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:23.743 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:23.743 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:18:24.002 [2024-10-07 07:39:23.479958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:24.002 /dev/nbd0 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local i 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@874 -- # (( i = 1 )) 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # break 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.002 1+0 records in 00:18:24.002 1+0 records out 00:18:24.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271659 s, 15.1 MB/s 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # size=4096 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # return 0 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:18:24.002 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:24.570 { 00:18:24.570 "nbd_device": "/dev/nbd0", 00:18:24.570 "bdev_name": "raid" 00:18:24.570 } 00:18:24.570 ]' 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:24.570 { 00:18:24.570 "nbd_device": "/dev/nbd0", 00:18:24.570 "bdev_name": "raid" 00:18:24.570 } 00:18:24.570 ]' 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:18:24.570 4096+0 records in 00:18:24.570 4096+0 records out 00:18:24.570 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0283327 s, 74.0 MB/s 00:18:24.570 07:39:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:18:24.828 4096+0 records in 00:18:24.828 4096+0 records out 00:18:24.828 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.251463 s, 8.3 MB/s 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:18:24.828 128+0 records in 00:18:24.828 128+0 records out 00:18:24.828 65536 bytes (66 kB, 64 KiB) copied, 0.00162799 s, 40.3 MB/s 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:18:24.828 2035+0 records in 00:18:24.828 2035+0 records out 00:18:24.828 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142574 s, 73.1 MB/s 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:18:24.828 07:39:24 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:24.828 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:18:24.829 456+0 records in 00:18:24.829 456+0 records out 00:18:24.829 233472 bytes (233 kB, 228 KiB) copied, 0.00275719 s, 84.7 MB/s 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:24.829 
07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.829 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.086 [2024-10-07 07:39:24.618467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.086 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:25.344 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:25.344 07:39:24 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:25.344 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60439 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' -z 60439 ']' 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # kill -0 60439 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # uname 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60439 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:25.602 killing process with pid 60439 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@971 -- # echo 'killing process with pid 60439' 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # kill 60439 00:18:25.602 07:39:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@977 -- # wait 60439 00:18:25.602 [2024-10-07 07:39:24.980751] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.602 [2024-10-07 07:39:24.980878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.602 [2024-10-07 07:39:24.980953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.602 [2024-10-07 07:39:24.980971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:18:25.860 [2024-10-07 07:39:25.202347] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.237 07:39:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:18:27.237 00:18:27.237 real 0m4.585s 00:18:27.237 user 0m5.444s 00:18:27.237 sys 0m1.134s 00:18:27.237 07:39:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:27.237 07:39:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:18:27.237 ************************************ 00:18:27.237 END TEST raid_function_test_concat 00:18:27.237 ************************************ 00:18:27.237 07:39:26 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:18:27.237 07:39:26 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:18:27.237 07:39:26 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:27.237 07:39:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.237 ************************************ 00:18:27.237 START TEST raid0_resize_test 00:18:27.237 ************************************ 00:18:27.237 07:39:26 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # raid_resize_test 0 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60569 00:18:27.237 Process raid pid: 60569 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60569' 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60569 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # '[' -z 60569 ']' 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:27.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:27.237 07:39:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.237 [2024-10-07 07:39:26.675913] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:18:27.237 [2024-10-07 07:39:26.676041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.496 [2024-10-07 07:39:26.842581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.754 [2024-10-07 07:39:27.070710] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.754 [2024-10-07 07:39:27.291394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.754 [2024-10-07 07:39:27.291445] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@867 -- # return 0 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 Base_1 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:18:28.322 
07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 Base_2 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 [2024-10-07 07:39:27.616800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:28.322 [2024-10-07 07:39:27.619029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:28.322 [2024-10-07 07:39:27.619092] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:28.322 [2024-10-07 07:39:27.619106] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:28.322 [2024-10-07 07:39:27.619385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:28.322 [2024-10-07 07:39:27.619560] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.322 [2024-10-07 07:39:27.619586] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:28.322 [2024-10-07 07:39:27.619778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:18:28.322 
07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 [2024-10-07 07:39:27.624742] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:28.322 [2024-10-07 07:39:27.624775] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:18:28.322 true 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 [2024-10-07 07:39:27.636897] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:18:28.322 [2024-10-07 07:39:27.676814] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:28.322 [2024-10-07 07:39:27.676859] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:18:28.322 [2024-10-07 07:39:27.676890] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:18:28.322 true 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:18:28.322 [2024-10-07 07:39:27.688963] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.322 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60569 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' -z 60569 ']' 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # kill -0 60569 
00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # uname 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60569 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:28.323 killing process with pid 60569 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 60569' 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # kill 60569 00:18:28.323 [2024-10-07 07:39:27.770472] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.323 [2024-10-07 07:39:27.770567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.323 07:39:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@977 -- # wait 60569 00:18:28.323 [2024-10-07 07:39:27.770621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.323 [2024-10-07 07:39:27.770633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:28.323 [2024-10-07 07:39:27.789919] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.715 07:39:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:18:29.715 00:18:29.715 real 0m2.607s 00:18:29.715 user 0m2.765s 00:18:29.715 sys 0m0.390s 00:18:29.715 07:39:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:29.715 07:39:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 ************************************ 00:18:29.715 END TEST 
raid0_resize_test 00:18:29.715 ************************************ 00:18:29.715 07:39:29 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:18:29.715 07:39:29 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:18:29.715 07:39:29 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:29.715 07:39:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 ************************************ 00:18:29.715 START TEST raid1_resize_test 00:18:29.715 ************************************ 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # raid_resize_test 1 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60635 00:18:29.715 Process raid pid: 60635 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60635' 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60635 00:18:29.715 07:39:29 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@834 -- # '[' -z 60635 ']' 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:29.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:29.715 07:39:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.972 [2024-10-07 07:39:29.367293] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:18:29.972 [2024-10-07 07:39:29.367490] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.230 [2024-10-07 07:39:29.562686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.488 [2024-10-07 07:39:29.886185] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.745 [2024-10-07 07:39:30.133343] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.745 [2024-10-07 07:39:30.133395] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@867 -- # return 0 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:18:31.005 07:39:30 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.005 Base_1 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.005 Base_2 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.005 [2024-10-07 07:39:30.410243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:31.005 [2024-10-07 07:39:30.412288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:31.005 [2024-10-07 07:39:30.412351] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.005 [2024-10-07 07:39:30.412364] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:31.005 [2024-10-07 07:39:30.412619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:31.005 [2024-10-07 07:39:30.412756] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.005 [2024-10-07 07:39:30.412767] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:31.005 [2024-10-07 07:39:30.412950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.005 [2024-10-07 07:39:30.418201] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:31.005 [2024-10-07 07:39:30.418229] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:18:31.005 true 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:18:31.005 [2024-10-07 07:39:30.430350] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:18:31.005 07:39:30 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.005 [2024-10-07 07:39:30.466266] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:31.005 [2024-10-07 07:39:30.466300] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:18:31.005 [2024-10-07 07:39:30.466339] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:18:31.005 true 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:18:31.005 [2024-10-07 07:39:30.478417] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:18:31.005 07:39:30 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:18:31.005 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60635 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@953 -- # '[' -z 60635 ']' 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # kill -0 60635 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # uname 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60635 00:18:31.006 killing process with pid 60635 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 60635' 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # kill 60635 00:18:31.006 [2024-10-07 07:39:30.561412] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.006 07:39:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@977 -- # wait 60635 00:18:31.006 [2024-10-07 07:39:30.561518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.006 [2024-10-07 07:39:30.562086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.006 [2024-10-07 07:39:30.562112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:31.306 [2024-10-07 07:39:30.582826] bdev_raid.c:1409:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:32.689 07:39:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:18:32.689 00:18:32.689 real 0m2.694s 00:18:32.689 user 0m2.960s 00:18:32.689 sys 0m0.415s 00:18:32.689 07:39:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:32.689 ************************************ 00:18:32.689 END TEST raid1_resize_test 00:18:32.689 ************************************ 00:18:32.689 07:39:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.689 07:39:31 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:18:32.689 07:39:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:32.689 07:39:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:18:32.689 07:39:31 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:18:32.689 07:39:31 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:32.689 07:39:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.689 ************************************ 00:18:32.689 START TEST raid_state_function_test 00:18:32.689 ************************************ 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid0 2 false 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:32.689 Process raid pid: 60699 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:32.689 07:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:32.689 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:18:32.689 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:32.689 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:32.689 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:32.689 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@229 -- # raid_pid=60699 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60699' 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60699 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 60699 ']' 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:32.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:32.690 07:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.690 [2024-10-07 07:39:32.122245] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:18:32.690 [2024-10-07 07:39:32.122431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.948 [2024-10-07 07:39:32.309971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.207 [2024-10-07 07:39:32.534188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.207 [2024-10-07 07:39:32.766173] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.465 [2024-10-07 07:39:32.766433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.724 [2024-10-07 07:39:33.143829] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.724 [2024-10-07 07:39:33.144058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.724 [2024-10-07 07:39:33.144084] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.724 [2024-10-07 07:39:33.144102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:33.724 07:39:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.724 "name": "Existed_Raid", 00:18:33.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.724 "strip_size_kb": 64, 00:18:33.724 "state": "configuring", 00:18:33.724 
"raid_level": "raid0", 00:18:33.724 "superblock": false, 00:18:33.724 "num_base_bdevs": 2, 00:18:33.724 "num_base_bdevs_discovered": 0, 00:18:33.724 "num_base_bdevs_operational": 2, 00:18:33.724 "base_bdevs_list": [ 00:18:33.724 { 00:18:33.724 "name": "BaseBdev1", 00:18:33.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.724 "is_configured": false, 00:18:33.724 "data_offset": 0, 00:18:33.724 "data_size": 0 00:18:33.724 }, 00:18:33.724 { 00:18:33.724 "name": "BaseBdev2", 00:18:33.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.724 "is_configured": false, 00:18:33.724 "data_offset": 0, 00:18:33.724 "data_size": 0 00:18:33.724 } 00:18:33.724 ] 00:18:33.724 }' 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.724 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.290 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:34.290 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.290 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.291 [2024-10-07 07:39:33.571858] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.291 [2024-10-07 07:39:33.571899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:34.291 [2024-10-07 07:39:33.579855] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.291 [2024-10-07 07:39:33.580014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.291 [2024-10-07 07:39:33.580035] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.291 [2024-10-07 07:39:33.580052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.291 [2024-10-07 07:39:33.640343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.291 BaseBdev1 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # 
rpc_cmd bdev_wait_for_examine 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.291 [ 00:18:34.291 { 00:18:34.291 "name": "BaseBdev1", 00:18:34.291 "aliases": [ 00:18:34.291 "e9dc01cd-06f4-44b9-8677-69cb4645100a" 00:18:34.291 ], 00:18:34.291 "product_name": "Malloc disk", 00:18:34.291 "block_size": 512, 00:18:34.291 "num_blocks": 65536, 00:18:34.291 "uuid": "e9dc01cd-06f4-44b9-8677-69cb4645100a", 00:18:34.291 "assigned_rate_limits": { 00:18:34.291 "rw_ios_per_sec": 0, 00:18:34.291 "rw_mbytes_per_sec": 0, 00:18:34.291 "r_mbytes_per_sec": 0, 00:18:34.291 "w_mbytes_per_sec": 0 00:18:34.291 }, 00:18:34.291 "claimed": true, 00:18:34.291 "claim_type": "exclusive_write", 00:18:34.291 "zoned": false, 00:18:34.291 "supported_io_types": { 00:18:34.291 "read": true, 00:18:34.291 "write": true, 00:18:34.291 "unmap": true, 00:18:34.291 "flush": true, 00:18:34.291 "reset": true, 00:18:34.291 "nvme_admin": false, 00:18:34.291 "nvme_io": false, 00:18:34.291 "nvme_io_md": false, 00:18:34.291 "write_zeroes": true, 00:18:34.291 "zcopy": true, 00:18:34.291 "get_zone_info": false, 00:18:34.291 "zone_management": false, 00:18:34.291 "zone_append": false, 00:18:34.291 "compare": false, 00:18:34.291 "compare_and_write": false, 00:18:34.291 "abort": true, 00:18:34.291 "seek_hole": false, 00:18:34.291 "seek_data": false, 00:18:34.291 "copy": true, 00:18:34.291 "nvme_iov_md": 
false 00:18:34.291 }, 00:18:34.291 "memory_domains": [ 00:18:34.291 { 00:18:34.291 "dma_device_id": "system", 00:18:34.291 "dma_device_type": 1 00:18:34.291 }, 00:18:34.291 { 00:18:34.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.291 "dma_device_type": 2 00:18:34.291 } 00:18:34.291 ], 00:18:34.291 "driver_specific": {} 00:18:34.291 } 00:18:34.291 ] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.291 
07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.291 "name": "Existed_Raid", 00:18:34.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.291 "strip_size_kb": 64, 00:18:34.291 "state": "configuring", 00:18:34.291 "raid_level": "raid0", 00:18:34.291 "superblock": false, 00:18:34.291 "num_base_bdevs": 2, 00:18:34.291 "num_base_bdevs_discovered": 1, 00:18:34.291 "num_base_bdevs_operational": 2, 00:18:34.291 "base_bdevs_list": [ 00:18:34.291 { 00:18:34.291 "name": "BaseBdev1", 00:18:34.291 "uuid": "e9dc01cd-06f4-44b9-8677-69cb4645100a", 00:18:34.291 "is_configured": true, 00:18:34.291 "data_offset": 0, 00:18:34.291 "data_size": 65536 00:18:34.291 }, 00:18:34.291 { 00:18:34.291 "name": "BaseBdev2", 00:18:34.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.291 "is_configured": false, 00:18:34.291 "data_offset": 0, 00:18:34.291 "data_size": 0 00:18:34.291 } 00:18:34.291 ] 00:18:34.291 }' 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.291 07:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.858 [2024-10-07 07:39:34.172576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.858 [2024-10-07 07:39:34.172635] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.858 [2024-10-07 07:39:34.180609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.858 [2024-10-07 07:39:34.183057] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.858 [2024-10-07 07:39:34.183202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.858 "name": "Existed_Raid", 00:18:34.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.858 "strip_size_kb": 64, 00:18:34.858 "state": "configuring", 00:18:34.858 "raid_level": "raid0", 00:18:34.858 "superblock": false, 00:18:34.858 "num_base_bdevs": 2, 00:18:34.858 "num_base_bdevs_discovered": 1, 00:18:34.858 "num_base_bdevs_operational": 2, 00:18:34.858 "base_bdevs_list": [ 00:18:34.858 { 00:18:34.858 "name": "BaseBdev1", 00:18:34.858 "uuid": "e9dc01cd-06f4-44b9-8677-69cb4645100a", 00:18:34.858 "is_configured": true, 00:18:34.858 "data_offset": 0, 00:18:34.858 "data_size": 65536 00:18:34.858 }, 00:18:34.858 { 00:18:34.858 "name": "BaseBdev2", 00:18:34.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.858 "is_configured": false, 00:18:34.858 "data_offset": 0, 00:18:34.858 "data_size": 0 00:18:34.858 } 00:18:34.858 
] 00:18:34.858 }' 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.858 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.117 [2024-10-07 07:39:34.670346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.117 [2024-10-07 07:39:34.670623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:35.117 [2024-10-07 07:39:34.670672] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:35.117 [2024-10-07 07:39:34.671079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:35.117 [2024-10-07 07:39:34.671356] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:35.117 [2024-10-07 07:39:34.671476] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:35.117 BaseBdev2 00:18:35.117 [2024-10-07 07:39:34.671901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:18:35.117 07:39:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:18:35.117 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 [ 00:18:35.376 { 00:18:35.376 "name": "BaseBdev2", 00:18:35.376 "aliases": [ 00:18:35.376 "490492b3-67fe-41d7-b3f6-7dc8be1260d5" 00:18:35.376 ], 00:18:35.376 "product_name": "Malloc disk", 00:18:35.376 "block_size": 512, 00:18:35.376 "num_blocks": 65536, 00:18:35.376 "uuid": "490492b3-67fe-41d7-b3f6-7dc8be1260d5", 00:18:35.376 "assigned_rate_limits": { 00:18:35.376 "rw_ios_per_sec": 0, 00:18:35.376 "rw_mbytes_per_sec": 0, 00:18:35.376 "r_mbytes_per_sec": 0, 00:18:35.376 "w_mbytes_per_sec": 0 00:18:35.376 }, 00:18:35.376 "claimed": true, 00:18:35.376 "claim_type": "exclusive_write", 00:18:35.376 "zoned": false, 00:18:35.376 "supported_io_types": { 00:18:35.376 "read": true, 00:18:35.376 "write": true, 00:18:35.376 "unmap": true, 00:18:35.376 "flush": true, 00:18:35.376 "reset": true, 00:18:35.376 "nvme_admin": false, 00:18:35.376 "nvme_io": false, 00:18:35.376 "nvme_io_md": 
false, 00:18:35.376 "write_zeroes": true, 00:18:35.376 "zcopy": true, 00:18:35.376 "get_zone_info": false, 00:18:35.376 "zone_management": false, 00:18:35.376 "zone_append": false, 00:18:35.376 "compare": false, 00:18:35.376 "compare_and_write": false, 00:18:35.376 "abort": true, 00:18:35.376 "seek_hole": false, 00:18:35.376 "seek_data": false, 00:18:35.376 "copy": true, 00:18:35.376 "nvme_iov_md": false 00:18:35.376 }, 00:18:35.376 "memory_domains": [ 00:18:35.376 { 00:18:35.376 "dma_device_id": "system", 00:18:35.376 "dma_device_type": 1 00:18:35.376 }, 00:18:35.376 { 00:18:35.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.376 "dma_device_type": 2 00:18:35.376 } 00:18:35.376 ], 00:18:35.376 "driver_specific": {} 00:18:35.376 } 00:18:35.376 ] 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.376 "name": "Existed_Raid", 00:18:35.376 "uuid": "f5a8150f-3807-44af-9ca7-6dbb9a5afab0", 00:18:35.376 "strip_size_kb": 64, 00:18:35.376 "state": "online", 00:18:35.376 "raid_level": "raid0", 00:18:35.376 "superblock": false, 00:18:35.376 "num_base_bdevs": 2, 00:18:35.376 "num_base_bdevs_discovered": 2, 00:18:35.376 "num_base_bdevs_operational": 2, 00:18:35.376 "base_bdevs_list": [ 00:18:35.376 { 00:18:35.376 "name": "BaseBdev1", 00:18:35.376 "uuid": "e9dc01cd-06f4-44b9-8677-69cb4645100a", 00:18:35.376 "is_configured": true, 00:18:35.376 "data_offset": 0, 00:18:35.376 "data_size": 65536 00:18:35.376 }, 00:18:35.376 { 00:18:35.376 "name": "BaseBdev2", 00:18:35.376 "uuid": "490492b3-67fe-41d7-b3f6-7dc8be1260d5", 00:18:35.376 "is_configured": true, 00:18:35.376 "data_offset": 0, 00:18:35.376 "data_size": 65536 00:18:35.376 } 00:18:35.376 ] 00:18:35.376 }' 00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:18:35.376 07:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.635 [2024-10-07 07:39:35.170868] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.635 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.894 "name": "Existed_Raid", 00:18:35.894 "aliases": [ 00:18:35.894 "f5a8150f-3807-44af-9ca7-6dbb9a5afab0" 00:18:35.894 ], 00:18:35.894 "product_name": "Raid Volume", 00:18:35.894 "block_size": 512, 00:18:35.894 "num_blocks": 131072, 00:18:35.894 "uuid": "f5a8150f-3807-44af-9ca7-6dbb9a5afab0", 00:18:35.894 "assigned_rate_limits": { 00:18:35.894 "rw_ios_per_sec": 0, 00:18:35.894 "rw_mbytes_per_sec": 0, 00:18:35.894 "r_mbytes_per_sec": 
0, 00:18:35.894 "w_mbytes_per_sec": 0 00:18:35.894 }, 00:18:35.894 "claimed": false, 00:18:35.894 "zoned": false, 00:18:35.894 "supported_io_types": { 00:18:35.894 "read": true, 00:18:35.894 "write": true, 00:18:35.894 "unmap": true, 00:18:35.894 "flush": true, 00:18:35.894 "reset": true, 00:18:35.894 "nvme_admin": false, 00:18:35.894 "nvme_io": false, 00:18:35.894 "nvme_io_md": false, 00:18:35.894 "write_zeroes": true, 00:18:35.894 "zcopy": false, 00:18:35.894 "get_zone_info": false, 00:18:35.894 "zone_management": false, 00:18:35.894 "zone_append": false, 00:18:35.894 "compare": false, 00:18:35.894 "compare_and_write": false, 00:18:35.894 "abort": false, 00:18:35.894 "seek_hole": false, 00:18:35.894 "seek_data": false, 00:18:35.894 "copy": false, 00:18:35.894 "nvme_iov_md": false 00:18:35.894 }, 00:18:35.894 "memory_domains": [ 00:18:35.894 { 00:18:35.894 "dma_device_id": "system", 00:18:35.894 "dma_device_type": 1 00:18:35.894 }, 00:18:35.894 { 00:18:35.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.894 "dma_device_type": 2 00:18:35.894 }, 00:18:35.894 { 00:18:35.894 "dma_device_id": "system", 00:18:35.894 "dma_device_type": 1 00:18:35.894 }, 00:18:35.894 { 00:18:35.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.894 "dma_device_type": 2 00:18:35.894 } 00:18:35.894 ], 00:18:35.894 "driver_specific": { 00:18:35.894 "raid": { 00:18:35.894 "uuid": "f5a8150f-3807-44af-9ca7-6dbb9a5afab0", 00:18:35.894 "strip_size_kb": 64, 00:18:35.894 "state": "online", 00:18:35.894 "raid_level": "raid0", 00:18:35.894 "superblock": false, 00:18:35.894 "num_base_bdevs": 2, 00:18:35.894 "num_base_bdevs_discovered": 2, 00:18:35.894 "num_base_bdevs_operational": 2, 00:18:35.894 "base_bdevs_list": [ 00:18:35.894 { 00:18:35.894 "name": "BaseBdev1", 00:18:35.894 "uuid": "e9dc01cd-06f4-44b9-8677-69cb4645100a", 00:18:35.894 "is_configured": true, 00:18:35.894 "data_offset": 0, 00:18:35.894 "data_size": 65536 00:18:35.894 }, 00:18:35.894 { 00:18:35.894 "name": "BaseBdev2", 
00:18:35.894 "uuid": "490492b3-67fe-41d7-b3f6-7dc8be1260d5", 00:18:35.894 "is_configured": true, 00:18:35.894 "data_offset": 0, 00:18:35.894 "data_size": 65536 00:18:35.894 } 00:18:35.894 ] 00:18:35.894 } 00:18:35.894 } 00:18:35.894 }' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:35.894 BaseBdev2' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:35.894 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 [2024-10-07 07:39:35.394659] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.894 [2024-10-07 07:39:35.394728] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.894 [2024-10-07 07:39:35.394806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.153 "name": "Existed_Raid", 00:18:36.153 "uuid": "f5a8150f-3807-44af-9ca7-6dbb9a5afab0", 00:18:36.153 "strip_size_kb": 64, 00:18:36.153 
"state": "offline", 00:18:36.153 "raid_level": "raid0", 00:18:36.153 "superblock": false, 00:18:36.153 "num_base_bdevs": 2, 00:18:36.153 "num_base_bdevs_discovered": 1, 00:18:36.153 "num_base_bdevs_operational": 1, 00:18:36.153 "base_bdevs_list": [ 00:18:36.153 { 00:18:36.153 "name": null, 00:18:36.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.153 "is_configured": false, 00:18:36.153 "data_offset": 0, 00:18:36.153 "data_size": 65536 00:18:36.153 }, 00:18:36.153 { 00:18:36.153 "name": "BaseBdev2", 00:18:36.153 "uuid": "490492b3-67fe-41d7-b3f6-7dc8be1260d5", 00:18:36.153 "is_configured": true, 00:18:36.153 "data_offset": 0, 00:18:36.153 "data_size": 65536 00:18:36.153 } 00:18:36.153 ] 00:18:36.153 }' 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.153 07:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:36.719 07:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 [2024-10-07 07:39:36.051223] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:36.719 [2024-10-07 07:39:36.051288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60699 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 60699 ']' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@957 -- # kill -0 60699 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60699 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:36.719 killing process with pid 60699 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 60699' 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 60699 00:18:36.719 [2024-10-07 07:39:36.263001] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.719 07:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 60699 00:18:36.978 [2024-10-07 07:39:36.280147] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.357 ************************************ 00:18:38.357 END TEST raid_state_function_test 00:18:38.357 ************************************ 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:38.357 00:18:38.357 real 0m5.633s 00:18:38.357 user 0m8.108s 00:18:38.357 sys 0m0.917s 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.357 07:39:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:18:38.357 07:39:37 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 
']' 00:18:38.357 07:39:37 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:38.357 07:39:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.357 ************************************ 00:18:38.357 START TEST raid_state_function_test_sb 00:18:38.357 ************************************ 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid0 2 true 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60952 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60952' 00:18:38.357 Process raid pid: 60952 00:18:38.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60952 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 60952 ']' 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.357 07:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:38.358 07:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.358 07:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:38.358 07:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.358 [2024-10-07 07:39:37.823329] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:18:38.358 [2024-10-07 07:39:37.823501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.627 [2024-10-07 07:39:38.004040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.886 [2024-10-07 07:39:38.245909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.144 [2024-10-07 07:39:38.471835] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.144 [2024-10-07 07:39:38.471881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.403 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:39.403 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:18:39.403 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:39.403 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.403 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 [2024-10-07 07:39:38.822803] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.404 [2024-10-07 07:39:38.823010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.404 [2024-10-07 07:39:38.823051] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.404 [2024-10-07 07:39:38.823070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.404 
07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.404 "name": "Existed_Raid", 00:18:39.404 "uuid": "eaa4da58-a568-42b0-bc9f-9328c0e495ab", 00:18:39.404 "strip_size_kb": 
64, 00:18:39.404 "state": "configuring", 00:18:39.404 "raid_level": "raid0", 00:18:39.404 "superblock": true, 00:18:39.404 "num_base_bdevs": 2, 00:18:39.404 "num_base_bdevs_discovered": 0, 00:18:39.404 "num_base_bdevs_operational": 2, 00:18:39.404 "base_bdevs_list": [ 00:18:39.404 { 00:18:39.404 "name": "BaseBdev1", 00:18:39.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.404 "is_configured": false, 00:18:39.404 "data_offset": 0, 00:18:39.404 "data_size": 0 00:18:39.404 }, 00:18:39.404 { 00:18:39.404 "name": "BaseBdev2", 00:18:39.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.404 "is_configured": false, 00:18:39.404 "data_offset": 0, 00:18:39.404 "data_size": 0 00:18:39.404 } 00:18:39.404 ] 00:18:39.404 }' 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.404 07:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.970 [2024-10-07 07:39:39.270789] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:39.970 [2024-10-07 07:39:39.270832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.970 07:39:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.970 [2024-10-07 07:39:39.278820] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.970 [2024-10-07 07:39:39.279019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.970 [2024-10-07 07:39:39.279047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.970 [2024-10-07 07:39:39.279067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.970 [2024-10-07 07:39:39.338193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.970 BaseBdev1 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:39.970 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # 
bdev_timeout=2000 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.971 [ 00:18:39.971 { 00:18:39.971 "name": "BaseBdev1", 00:18:39.971 "aliases": [ 00:18:39.971 "6182ab9b-10a2-4c85-81ce-c1b7ae652750" 00:18:39.971 ], 00:18:39.971 "product_name": "Malloc disk", 00:18:39.971 "block_size": 512, 00:18:39.971 "num_blocks": 65536, 00:18:39.971 "uuid": "6182ab9b-10a2-4c85-81ce-c1b7ae652750", 00:18:39.971 "assigned_rate_limits": { 00:18:39.971 "rw_ios_per_sec": 0, 00:18:39.971 "rw_mbytes_per_sec": 0, 00:18:39.971 "r_mbytes_per_sec": 0, 00:18:39.971 "w_mbytes_per_sec": 0 00:18:39.971 }, 00:18:39.971 "claimed": true, 00:18:39.971 "claim_type": "exclusive_write", 00:18:39.971 "zoned": false, 00:18:39.971 "supported_io_types": { 00:18:39.971 "read": true, 00:18:39.971 "write": true, 00:18:39.971 "unmap": true, 00:18:39.971 "flush": true, 00:18:39.971 "reset": true, 00:18:39.971 "nvme_admin": false, 00:18:39.971 "nvme_io": false, 00:18:39.971 "nvme_io_md": false, 00:18:39.971 "write_zeroes": true, 00:18:39.971 "zcopy": true, 00:18:39.971 "get_zone_info": false, 00:18:39.971 "zone_management": false, 00:18:39.971 "zone_append": false, 00:18:39.971 "compare": false, 00:18:39.971 "compare_and_write": false, 00:18:39.971 
"abort": true, 00:18:39.971 "seek_hole": false, 00:18:39.971 "seek_data": false, 00:18:39.971 "copy": true, 00:18:39.971 "nvme_iov_md": false 00:18:39.971 }, 00:18:39.971 "memory_domains": [ 00:18:39.971 { 00:18:39.971 "dma_device_id": "system", 00:18:39.971 "dma_device_type": 1 00:18:39.971 }, 00:18:39.971 { 00:18:39.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.971 "dma_device_type": 2 00:18:39.971 } 00:18:39.971 ], 00:18:39.971 "driver_specific": {} 00:18:39.971 } 00:18:39.971 ] 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.971 "name": "Existed_Raid", 00:18:39.971 "uuid": "e21173c1-972a-43da-a3a5-64490e136265", 00:18:39.971 "strip_size_kb": 64, 00:18:39.971 "state": "configuring", 00:18:39.971 "raid_level": "raid0", 00:18:39.971 "superblock": true, 00:18:39.971 "num_base_bdevs": 2, 00:18:39.971 "num_base_bdevs_discovered": 1, 00:18:39.971 "num_base_bdevs_operational": 2, 00:18:39.971 "base_bdevs_list": [ 00:18:39.971 { 00:18:39.971 "name": "BaseBdev1", 00:18:39.971 "uuid": "6182ab9b-10a2-4c85-81ce-c1b7ae652750", 00:18:39.971 "is_configured": true, 00:18:39.971 "data_offset": 2048, 00:18:39.971 "data_size": 63488 00:18:39.971 }, 00:18:39.971 { 00:18:39.971 "name": "BaseBdev2", 00:18:39.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.971 "is_configured": false, 00:18:39.971 "data_offset": 0, 00:18:39.971 "data_size": 0 00:18:39.971 } 00:18:39.971 ] 00:18:39.971 }' 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.971 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.537 [2024-10-07 07:39:39.858431] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.537 [2024-10-07 07:39:39.858509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.537 [2024-10-07 07:39:39.866484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.537 [2024-10-07 07:39:39.869234] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.537 [2024-10-07 07:39:39.869475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.537 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.538 "name": "Existed_Raid", 00:18:40.538 "uuid": "6c4a6339-444e-4441-ba0a-4220dd801e74", 00:18:40.538 "strip_size_kb": 64, 00:18:40.538 "state": "configuring", 00:18:40.538 "raid_level": "raid0", 00:18:40.538 "superblock": true, 00:18:40.538 "num_base_bdevs": 2, 00:18:40.538 "num_base_bdevs_discovered": 1, 00:18:40.538 "num_base_bdevs_operational": 2, 00:18:40.538 "base_bdevs_list": [ 00:18:40.538 { 00:18:40.538 "name": "BaseBdev1", 00:18:40.538 "uuid": "6182ab9b-10a2-4c85-81ce-c1b7ae652750", 00:18:40.538 "is_configured": true, 00:18:40.538 "data_offset": 2048, 
00:18:40.538 "data_size": 63488 00:18:40.538 }, 00:18:40.538 { 00:18:40.538 "name": "BaseBdev2", 00:18:40.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.538 "is_configured": false, 00:18:40.538 "data_offset": 0, 00:18:40.538 "data_size": 0 00:18:40.538 } 00:18:40.538 ] 00:18:40.538 }' 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.538 07:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:40.796 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:40.796 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.054 [2024-10-07 07:39:40.373745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.054 [2024-10-07 07:39:40.374038] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:41.054 [2024-10-07 07:39:40.374055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:41.054 [2024-10-07 07:39:40.374337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:41.054 [2024-10-07 07:39:40.374481] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:41.054 [2024-10-07 07:39:40.374494] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:41.054 [2024-10-07 07:39:40.374633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.054 BaseBdev2 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.054 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.054 [ 00:18:41.054 { 00:18:41.054 "name": "BaseBdev2", 00:18:41.054 "aliases": [ 00:18:41.054 "59b46a4b-b73a-48ee-93b0-55cc5439cd49" 00:18:41.054 ], 00:18:41.054 "product_name": "Malloc disk", 00:18:41.054 "block_size": 512, 00:18:41.054 "num_blocks": 65536, 00:18:41.054 "uuid": "59b46a4b-b73a-48ee-93b0-55cc5439cd49", 00:18:41.054 "assigned_rate_limits": { 00:18:41.054 "rw_ios_per_sec": 0, 00:18:41.054 "rw_mbytes_per_sec": 0, 00:18:41.054 "r_mbytes_per_sec": 0, 00:18:41.054 "w_mbytes_per_sec": 0 00:18:41.054 }, 00:18:41.054 "claimed": true, 00:18:41.054 "claim_type": 
"exclusive_write", 00:18:41.054 "zoned": false, 00:18:41.054 "supported_io_types": { 00:18:41.054 "read": true, 00:18:41.054 "write": true, 00:18:41.054 "unmap": true, 00:18:41.054 "flush": true, 00:18:41.054 "reset": true, 00:18:41.054 "nvme_admin": false, 00:18:41.054 "nvme_io": false, 00:18:41.054 "nvme_io_md": false, 00:18:41.054 "write_zeroes": true, 00:18:41.054 "zcopy": true, 00:18:41.054 "get_zone_info": false, 00:18:41.054 "zone_management": false, 00:18:41.054 "zone_append": false, 00:18:41.054 "compare": false, 00:18:41.054 "compare_and_write": false, 00:18:41.054 "abort": true, 00:18:41.054 "seek_hole": false, 00:18:41.054 "seek_data": false, 00:18:41.054 "copy": true, 00:18:41.054 "nvme_iov_md": false 00:18:41.054 }, 00:18:41.054 "memory_domains": [ 00:18:41.054 { 00:18:41.054 "dma_device_id": "system", 00:18:41.054 "dma_device_type": 1 00:18:41.055 }, 00:18:41.055 { 00:18:41.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.055 "dma_device_type": 2 00:18:41.055 } 00:18:41.055 ], 00:18:41.055 "driver_specific": {} 00:18:41.055 } 00:18:41.055 ] 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.055 "name": "Existed_Raid", 00:18:41.055 "uuid": "6c4a6339-444e-4441-ba0a-4220dd801e74", 00:18:41.055 "strip_size_kb": 64, 00:18:41.055 "state": "online", 00:18:41.055 "raid_level": "raid0", 00:18:41.055 "superblock": true, 00:18:41.055 "num_base_bdevs": 2, 00:18:41.055 "num_base_bdevs_discovered": 2, 00:18:41.055 "num_base_bdevs_operational": 2, 00:18:41.055 "base_bdevs_list": [ 00:18:41.055 { 00:18:41.055 "name": "BaseBdev1", 00:18:41.055 "uuid": "6182ab9b-10a2-4c85-81ce-c1b7ae652750", 00:18:41.055 "is_configured": true, 00:18:41.055 "data_offset": 2048, 00:18:41.055 "data_size": 63488 
00:18:41.055 }, 00:18:41.055 { 00:18:41.055 "name": "BaseBdev2", 00:18:41.055 "uuid": "59b46a4b-b73a-48ee-93b0-55cc5439cd49", 00:18:41.055 "is_configured": true, 00:18:41.055 "data_offset": 2048, 00:18:41.055 "data_size": 63488 00:18:41.055 } 00:18:41.055 ] 00:18:41.055 }' 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.055 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.314 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:41.314 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:41.314 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:41.314 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:41.314 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:41.314 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:41.573 [2024-10-07 07:39:40.878232] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:41.573 "name": 
"Existed_Raid", 00:18:41.573 "aliases": [ 00:18:41.573 "6c4a6339-444e-4441-ba0a-4220dd801e74" 00:18:41.573 ], 00:18:41.573 "product_name": "Raid Volume", 00:18:41.573 "block_size": 512, 00:18:41.573 "num_blocks": 126976, 00:18:41.573 "uuid": "6c4a6339-444e-4441-ba0a-4220dd801e74", 00:18:41.573 "assigned_rate_limits": { 00:18:41.573 "rw_ios_per_sec": 0, 00:18:41.573 "rw_mbytes_per_sec": 0, 00:18:41.573 "r_mbytes_per_sec": 0, 00:18:41.573 "w_mbytes_per_sec": 0 00:18:41.573 }, 00:18:41.573 "claimed": false, 00:18:41.573 "zoned": false, 00:18:41.573 "supported_io_types": { 00:18:41.573 "read": true, 00:18:41.573 "write": true, 00:18:41.573 "unmap": true, 00:18:41.573 "flush": true, 00:18:41.573 "reset": true, 00:18:41.573 "nvme_admin": false, 00:18:41.573 "nvme_io": false, 00:18:41.573 "nvme_io_md": false, 00:18:41.573 "write_zeroes": true, 00:18:41.573 "zcopy": false, 00:18:41.573 "get_zone_info": false, 00:18:41.573 "zone_management": false, 00:18:41.573 "zone_append": false, 00:18:41.573 "compare": false, 00:18:41.573 "compare_and_write": false, 00:18:41.573 "abort": false, 00:18:41.573 "seek_hole": false, 00:18:41.573 "seek_data": false, 00:18:41.573 "copy": false, 00:18:41.573 "nvme_iov_md": false 00:18:41.573 }, 00:18:41.573 "memory_domains": [ 00:18:41.573 { 00:18:41.573 "dma_device_id": "system", 00:18:41.573 "dma_device_type": 1 00:18:41.573 }, 00:18:41.573 { 00:18:41.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.573 "dma_device_type": 2 00:18:41.573 }, 00:18:41.573 { 00:18:41.573 "dma_device_id": "system", 00:18:41.573 "dma_device_type": 1 00:18:41.573 }, 00:18:41.573 { 00:18:41.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.573 "dma_device_type": 2 00:18:41.573 } 00:18:41.573 ], 00:18:41.573 "driver_specific": { 00:18:41.573 "raid": { 00:18:41.573 "uuid": "6c4a6339-444e-4441-ba0a-4220dd801e74", 00:18:41.573 "strip_size_kb": 64, 00:18:41.573 "state": "online", 00:18:41.573 "raid_level": "raid0", 00:18:41.573 "superblock": true, 00:18:41.573 
"num_base_bdevs": 2, 00:18:41.573 "num_base_bdevs_discovered": 2, 00:18:41.573 "num_base_bdevs_operational": 2, 00:18:41.573 "base_bdevs_list": [ 00:18:41.573 { 00:18:41.573 "name": "BaseBdev1", 00:18:41.573 "uuid": "6182ab9b-10a2-4c85-81ce-c1b7ae652750", 00:18:41.573 "is_configured": true, 00:18:41.573 "data_offset": 2048, 00:18:41.573 "data_size": 63488 00:18:41.573 }, 00:18:41.573 { 00:18:41.573 "name": "BaseBdev2", 00:18:41.573 "uuid": "59b46a4b-b73a-48ee-93b0-55cc5439cd49", 00:18:41.573 "is_configured": true, 00:18:41.573 "data_offset": 2048, 00:18:41.573 "data_size": 63488 00:18:41.573 } 00:18:41.573 ] 00:18:41.573 } 00:18:41.573 } 00:18:41.573 }' 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:41.573 BaseBdev2' 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.573 07:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.573 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.574 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.574 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.574 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:41.574 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.574 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.574 [2024-10-07 07:39:41.074037] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.574 [2024-10-07 07:39:41.074194] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.574 [2024-10-07 07:39:41.074356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 
]] 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.832 07:39:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.832 "name": "Existed_Raid", 00:18:41.832 "uuid": "6c4a6339-444e-4441-ba0a-4220dd801e74", 00:18:41.832 "strip_size_kb": 64, 00:18:41.832 "state": "offline", 00:18:41.832 "raid_level": "raid0", 00:18:41.832 "superblock": true, 00:18:41.832 "num_base_bdevs": 2, 00:18:41.832 "num_base_bdevs_discovered": 1, 00:18:41.832 "num_base_bdevs_operational": 1, 00:18:41.832 "base_bdevs_list": [ 00:18:41.832 { 00:18:41.832 "name": null, 00:18:41.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.832 "is_configured": false, 00:18:41.832 "data_offset": 0, 00:18:41.832 "data_size": 63488 00:18:41.832 }, 00:18:41.832 { 00:18:41.832 "name": "BaseBdev2", 00:18:41.832 "uuid": "59b46a4b-b73a-48ee-93b0-55cc5439cd49", 00:18:41.832 "is_configured": true, 00:18:41.832 "data_offset": 2048, 00:18:41.832 "data_size": 63488 00:18:41.832 } 00:18:41.832 ] 00:18:41.832 }' 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.832 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.126 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:42.126 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:42.126 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:42.126 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.126 07:39:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:42.126 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.384 [2024-10-07 07:39:41.726096] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.384 [2024-10-07 07:39:41.726152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:42.384 07:39:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60952 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 60952 ']' 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 60952 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 60952 00:18:42.384 killing process with pid 60952 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 60952' 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 60952 00:18:42.384 [2024-10-07 07:39:41.915716] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:42.384 07:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 60952 00:18:42.384 [2024-10-07 07:39:41.933808] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.759 07:39:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:18:43.759 00:18:43.759 real 0m5.566s 00:18:43.759 user 0m8.060s 00:18:43.759 sys 0m0.892s 00:18:43.759 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:43.759 07:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.759 ************************************ 00:18:43.759 END TEST raid_state_function_test_sb 00:18:43.759 ************************************ 00:18:43.759 07:39:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:18:43.759 07:39:43 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:18:43.759 07:39:43 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:43.759 07:39:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.759 ************************************ 00:18:43.759 START TEST raid_superblock_test 00:18:43.759 ************************************ 00:18:43.759 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid0 2 00:18:43.759 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:18:43.759 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:43.759 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:43.759 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:43.759 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:43.759 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:43.760 07:39:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61216 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61216 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 61216 ']' 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:43.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:43.760 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.018 [2024-10-07 07:39:43.399722] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:18:44.018 [2024-10-07 07:39:43.400540] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61216 ] 00:18:44.277 [2024-10-07 07:39:43.589397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.277 [2024-10-07 07:39:43.811323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.536 [2024-10-07 07:39:44.026253] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.536 [2024-10-07 07:39:44.026317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:45.103 07:39:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.103 malloc1 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.103 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.103 [2024-10-07 07:39:44.455781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:45.103 [2024-10-07 07:39:44.455849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.103 [2024-10-07 07:39:44.455879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:45.103 [2024-10-07 07:39:44.455896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.104 [2024-10-07 07:39:44.458504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.104 [2024-10-07 07:39:44.458549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:45.104 pt1 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:45.104 07:39:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.104 malloc2 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.104 [2024-10-07 07:39:44.521464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:45.104 [2024-10-07 07:39:44.521544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.104 [2024-10-07 07:39:44.521574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:45.104 
[2024-10-07 07:39:44.521588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.104 [2024-10-07 07:39:44.524190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.104 [2024-10-07 07:39:44.524230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:45.104 pt2 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.104 [2024-10-07 07:39:44.533560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.104 [2024-10-07 07:39:44.535745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.104 [2024-10-07 07:39:44.535914] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:45.104 [2024-10-07 07:39:44.535928] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:45.104 [2024-10-07 07:39:44.536248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:45.104 [2024-10-07 07:39:44.536414] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:45.104 [2024-10-07 07:39:44.536435] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:45.104 [2024-10-07 07:39:44.536637] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.104 "name": "raid_bdev1", 00:18:45.104 "uuid": 
"1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55", 00:18:45.104 "strip_size_kb": 64, 00:18:45.104 "state": "online", 00:18:45.104 "raid_level": "raid0", 00:18:45.104 "superblock": true, 00:18:45.104 "num_base_bdevs": 2, 00:18:45.104 "num_base_bdevs_discovered": 2, 00:18:45.104 "num_base_bdevs_operational": 2, 00:18:45.104 "base_bdevs_list": [ 00:18:45.104 { 00:18:45.104 "name": "pt1", 00:18:45.104 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.104 "is_configured": true, 00:18:45.104 "data_offset": 2048, 00:18:45.104 "data_size": 63488 00:18:45.104 }, 00:18:45.104 { 00:18:45.104 "name": "pt2", 00:18:45.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.104 "is_configured": true, 00:18:45.104 "data_offset": 2048, 00:18:45.104 "data_size": 63488 00:18:45.104 } 00:18:45.104 ] 00:18:45.104 }' 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.104 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:45.683 07:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.683 07:39:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 [2024-10-07 07:39:44.981959] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.683 "name": "raid_bdev1", 00:18:45.683 "aliases": [ 00:18:45.683 "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55" 00:18:45.683 ], 00:18:45.683 "product_name": "Raid Volume", 00:18:45.683 "block_size": 512, 00:18:45.683 "num_blocks": 126976, 00:18:45.683 "uuid": "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55", 00:18:45.683 "assigned_rate_limits": { 00:18:45.683 "rw_ios_per_sec": 0, 00:18:45.683 "rw_mbytes_per_sec": 0, 00:18:45.683 "r_mbytes_per_sec": 0, 00:18:45.683 "w_mbytes_per_sec": 0 00:18:45.683 }, 00:18:45.683 "claimed": false, 00:18:45.683 "zoned": false, 00:18:45.683 "supported_io_types": { 00:18:45.683 "read": true, 00:18:45.683 "write": true, 00:18:45.683 "unmap": true, 00:18:45.683 "flush": true, 00:18:45.683 "reset": true, 00:18:45.683 "nvme_admin": false, 00:18:45.683 "nvme_io": false, 00:18:45.683 "nvme_io_md": false, 00:18:45.683 "write_zeroes": true, 00:18:45.683 "zcopy": false, 00:18:45.683 "get_zone_info": false, 00:18:45.683 "zone_management": false, 00:18:45.683 "zone_append": false, 00:18:45.683 "compare": false, 00:18:45.683 "compare_and_write": false, 00:18:45.683 "abort": false, 00:18:45.683 "seek_hole": false, 00:18:45.683 "seek_data": false, 00:18:45.683 "copy": false, 00:18:45.683 "nvme_iov_md": false 00:18:45.683 }, 00:18:45.683 "memory_domains": [ 00:18:45.683 { 00:18:45.683 "dma_device_id": "system", 00:18:45.683 "dma_device_type": 1 00:18:45.683 }, 00:18:45.683 { 00:18:45.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.683 "dma_device_type": 2 00:18:45.683 }, 00:18:45.683 { 00:18:45.683 "dma_device_id": "system", 00:18:45.683 "dma_device_type": 
1 00:18:45.683 }, 00:18:45.683 { 00:18:45.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.683 "dma_device_type": 2 00:18:45.683 } 00:18:45.683 ], 00:18:45.683 "driver_specific": { 00:18:45.683 "raid": { 00:18:45.683 "uuid": "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55", 00:18:45.683 "strip_size_kb": 64, 00:18:45.683 "state": "online", 00:18:45.683 "raid_level": "raid0", 00:18:45.683 "superblock": true, 00:18:45.683 "num_base_bdevs": 2, 00:18:45.683 "num_base_bdevs_discovered": 2, 00:18:45.683 "num_base_bdevs_operational": 2, 00:18:45.683 "base_bdevs_list": [ 00:18:45.683 { 00:18:45.683 "name": "pt1", 00:18:45.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.683 "is_configured": true, 00:18:45.683 "data_offset": 2048, 00:18:45.683 "data_size": 63488 00:18:45.683 }, 00:18:45.683 { 00:18:45.683 "name": "pt2", 00:18:45.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.683 "is_configured": true, 00:18:45.683 "data_offset": 2048, 00:18:45.683 "data_size": 63488 00:18:45.683 } 00:18:45.683 ] 00:18:45.683 } 00:18:45.683 } 00:18:45.683 }' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:45.683 pt2' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:45.683 [2024-10-07 07:39:45.173946] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.683 07:39:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55 ']' 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 [2024-10-07 07:39:45.221662] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.683 [2024-10-07 07:39:45.221722] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.683 [2024-10-07 07:39:45.221824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.683 [2024-10-07 07:39:45.221876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.683 [2024-10-07 07:39:45.221893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.944 07:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.944 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.944 [2024-10-07 07:39:45.333692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:45.945 [2024-10-07 07:39:45.336165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:45.945 [2024-10-07 07:39:45.336245] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:45.945 [2024-10-07 07:39:45.336306] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:45.945 [2024-10-07 07:39:45.336327] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.945 [2024-10-07 07:39:45.336342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:45.945 request: 00:18:45.945 { 00:18:45.945 "name": "raid_bdev1", 00:18:45.945 "raid_level": "raid0", 00:18:45.945 "base_bdevs": [ 00:18:45.945 "malloc1", 00:18:45.945 "malloc2" 00:18:45.945 ], 00:18:45.945 "strip_size_kb": 64, 00:18:45.945 "superblock": false, 00:18:45.945 "method": "bdev_raid_create", 00:18:45.945 "req_id": 1 00:18:45.945 } 00:18:45.945 Got JSON-RPC error response 00:18:45.945 response: 00:18:45.945 { 00:18:45.945 "code": -17, 00:18:45.945 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:45.945 } 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.945 [2024-10-07 07:39:45.389678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:45.945 [2024-10-07 07:39:45.389773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.945 [2024-10-07 07:39:45.389802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:45.945 [2024-10-07 07:39:45.389820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.945 pt1 00:18:45.945 [2024-10-07 07:39:45.392609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.945 [2024-10-07 07:39:45.392658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:45.945 [2024-10-07 07:39:45.392765] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:45.945 [2024-10-07 07:39:45.392849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.945 "name": "raid_bdev1", 00:18:45.945 "uuid": "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55", 00:18:45.945 "strip_size_kb": 64, 00:18:45.945 "state": "configuring", 00:18:45.945 "raid_level": "raid0", 00:18:45.945 "superblock": true, 00:18:45.945 "num_base_bdevs": 2, 00:18:45.945 "num_base_bdevs_discovered": 1, 00:18:45.945 "num_base_bdevs_operational": 2, 00:18:45.945 "base_bdevs_list": [ 00:18:45.945 { 00:18:45.945 "name": "pt1", 00:18:45.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.945 "is_configured": true, 00:18:45.945 "data_offset": 2048, 00:18:45.945 "data_size": 63488 00:18:45.945 }, 00:18:45.945 { 00:18:45.945 "name": null, 00:18:45.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.945 "is_configured": false, 00:18:45.945 "data_offset": 2048, 00:18:45.945 "data_size": 63488 00:18:45.945 } 00:18:45.945 ] 00:18:45.945 }' 00:18:45.945 07:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.945 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.514 [2024-10-07 07:39:45.797789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.514 [2024-10-07 07:39:45.797881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.514 [2024-10-07 07:39:45.797919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:46.514 [2024-10-07 07:39:45.797935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.514 [2024-10-07 07:39:45.798471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.514 [2024-10-07 07:39:45.798498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.514 [2024-10-07 07:39:45.798588] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:46.514 [2024-10-07 07:39:45.798615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.514 [2024-10-07 07:39:45.798753] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:46.514 [2024-10-07 07:39:45.798769] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:46.514 [2024-10-07 07:39:45.799032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:46.514 [2024-10-07 07:39:45.799183] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:46.514 [2024-10-07 07:39:45.799193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:46.514 [2024-10-07 07:39:45.799357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.514 pt2 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:46.514 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.515 "name": "raid_bdev1", 00:18:46.515 "uuid": "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55", 00:18:46.515 "strip_size_kb": 64, 00:18:46.515 "state": "online", 00:18:46.515 "raid_level": "raid0", 00:18:46.515 "superblock": true, 00:18:46.515 "num_base_bdevs": 2, 00:18:46.515 "num_base_bdevs_discovered": 2, 00:18:46.515 "num_base_bdevs_operational": 2, 00:18:46.515 "base_bdevs_list": [ 00:18:46.515 { 00:18:46.515 "name": "pt1", 00:18:46.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:46.515 "is_configured": true, 00:18:46.515 "data_offset": 2048, 00:18:46.515 "data_size": 63488 00:18:46.515 }, 00:18:46.515 { 00:18:46.515 "name": "pt2", 00:18:46.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.515 "is_configured": true, 00:18:46.515 "data_offset": 2048, 00:18:46.515 "data_size": 63488 00:18:46.515 } 00:18:46.515 ] 00:18:46.515 }' 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.515 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:46.774 
07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.774 [2024-10-07 07:39:46.246150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.774 "name": "raid_bdev1", 00:18:46.774 "aliases": [ 00:18:46.774 "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55" 00:18:46.774 ], 00:18:46.774 "product_name": "Raid Volume", 00:18:46.774 "block_size": 512, 00:18:46.774 "num_blocks": 126976, 00:18:46.774 "uuid": "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55", 00:18:46.774 "assigned_rate_limits": { 00:18:46.774 "rw_ios_per_sec": 0, 00:18:46.774 "rw_mbytes_per_sec": 0, 00:18:46.774 "r_mbytes_per_sec": 0, 00:18:46.774 "w_mbytes_per_sec": 0 00:18:46.774 }, 00:18:46.774 "claimed": false, 00:18:46.774 "zoned": false, 00:18:46.774 "supported_io_types": { 00:18:46.774 "read": true, 00:18:46.774 "write": true, 00:18:46.774 "unmap": true, 00:18:46.774 "flush": true, 00:18:46.774 "reset": true, 00:18:46.774 "nvme_admin": false, 00:18:46.774 "nvme_io": false, 00:18:46.774 "nvme_io_md": false, 00:18:46.774 
"write_zeroes": true, 00:18:46.774 "zcopy": false, 00:18:46.774 "get_zone_info": false, 00:18:46.774 "zone_management": false, 00:18:46.774 "zone_append": false, 00:18:46.774 "compare": false, 00:18:46.774 "compare_and_write": false, 00:18:46.774 "abort": false, 00:18:46.774 "seek_hole": false, 00:18:46.774 "seek_data": false, 00:18:46.774 "copy": false, 00:18:46.774 "nvme_iov_md": false 00:18:46.774 }, 00:18:46.774 "memory_domains": [ 00:18:46.774 { 00:18:46.774 "dma_device_id": "system", 00:18:46.774 "dma_device_type": 1 00:18:46.774 }, 00:18:46.774 { 00:18:46.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.774 "dma_device_type": 2 00:18:46.774 }, 00:18:46.774 { 00:18:46.774 "dma_device_id": "system", 00:18:46.774 "dma_device_type": 1 00:18:46.774 }, 00:18:46.774 { 00:18:46.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.774 "dma_device_type": 2 00:18:46.774 } 00:18:46.774 ], 00:18:46.774 "driver_specific": { 00:18:46.774 "raid": { 00:18:46.774 "uuid": "1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55", 00:18:46.774 "strip_size_kb": 64, 00:18:46.774 "state": "online", 00:18:46.774 "raid_level": "raid0", 00:18:46.774 "superblock": true, 00:18:46.774 "num_base_bdevs": 2, 00:18:46.774 "num_base_bdevs_discovered": 2, 00:18:46.774 "num_base_bdevs_operational": 2, 00:18:46.774 "base_bdevs_list": [ 00:18:46.774 { 00:18:46.774 "name": "pt1", 00:18:46.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:46.774 "is_configured": true, 00:18:46.774 "data_offset": 2048, 00:18:46.774 "data_size": 63488 00:18:46.774 }, 00:18:46.774 { 00:18:46.774 "name": "pt2", 00:18:46.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.774 "is_configured": true, 00:18:46.774 "data_offset": 2048, 00:18:46.774 "data_size": 63488 00:18:46.774 } 00:18:46.774 ] 00:18:46.774 } 00:18:46.774 } 00:18:46.774 }' 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:46.774 pt2' 00:18:46.774 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:47.033 07:39:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.033 [2024-10-07 07:39:46.466220] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55 '!=' 1baaa5ff-6726-48b6-bfa9-ba6fbaeb2f55 ']' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61216 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 61216 ']' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 61216 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 61216 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:47.033 killing process with pid 61216 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 61216' 00:18:47.033 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 61216 00:18:47.034 [2024-10-07 07:39:46.542619] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.034 [2024-10-07 07:39:46.542712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.034 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 61216 00:18:47.034 [2024-10-07 07:39:46.542777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.034 [2024-10-07 07:39:46.542791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:47.292 [2024-10-07 07:39:46.765431] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.670 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:48.670 00:18:48.670 real 0m4.751s 00:18:48.670 user 0m6.617s 00:18:48.670 sys 0m0.783s 00:18:48.670 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:48.670 ************************************ 00:18:48.670 END TEST raid_superblock_test 00:18:48.670 ************************************ 00:18:48.670 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.670 07:39:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:18:48.670 07:39:48 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:18:48.670 07:39:48 bdev_raid -- common/autotest_common.sh@1110 -- # 
xtrace_disable 00:18:48.670 07:39:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.670 ************************************ 00:18:48.670 START TEST raid_read_error_test 00:18:48.670 ************************************ 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid0 2 read 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JvCDyPSTcP 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61422 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61422 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 61422 ']' 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:48.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:48.670 07:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.931 [2024-10-07 07:39:48.260736] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:18:48.931 [2024-10-07 07:39:48.260940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61422 ] 00:18:48.931 [2024-10-07 07:39:48.452624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.190 [2024-10-07 07:39:48.663562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.449 [2024-10-07 07:39:48.887056] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.449 [2024-10-07 07:39:48.887130] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.709 BaseBdev1_malloc 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.709 true 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.709 [2024-10-07 07:39:49.244442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:49.709 [2024-10-07 07:39:49.244506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.709 [2024-10-07 07:39:49.244528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:49.709 [2024-10-07 07:39:49.244543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.709 [2024-10-07 07:39:49.247101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.709 [2024-10-07 07:39:49.247143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:49.709 BaseBdev1 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.709 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:49.968 BaseBdev2_malloc 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.968 true 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.968 [2024-10-07 07:39:49.314087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:49.968 [2024-10-07 07:39:49.314151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.968 [2024-10-07 07:39:49.314180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:49.968 [2024-10-07 07:39:49.314195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.968 [2024-10-07 07:39:49.316770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.968 [2024-10-07 07:39:49.316819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:49.968 BaseBdev2 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:18:49.968 07:39:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.968 [2024-10-07 07:39:49.326173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.968 [2024-10-07 07:39:49.328256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.968 [2024-10-07 07:39:49.328456] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:49.968 [2024-10-07 07:39:49.328473] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:49.968 [2024-10-07 07:39:49.328748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:49.968 [2024-10-07 07:39:49.328923] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:49.968 [2024-10-07 07:39:49.328935] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:49.968 [2024-10-07 07:39:49.329105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.968 "name": "raid_bdev1", 00:18:49.968 "uuid": "2951d111-e2d3-4656-9bef-3dbd34c92d0f", 00:18:49.968 "strip_size_kb": 64, 00:18:49.968 "state": "online", 00:18:49.968 "raid_level": "raid0", 00:18:49.968 "superblock": true, 00:18:49.968 "num_base_bdevs": 2, 00:18:49.968 "num_base_bdevs_discovered": 2, 00:18:49.968 "num_base_bdevs_operational": 2, 00:18:49.968 "base_bdevs_list": [ 00:18:49.968 { 00:18:49.968 "name": "BaseBdev1", 00:18:49.968 "uuid": "91ed08a5-dfc3-5c77-9d4a-4f5094e6289b", 00:18:49.968 "is_configured": true, 00:18:49.968 "data_offset": 2048, 00:18:49.968 "data_size": 63488 00:18:49.968 }, 00:18:49.968 { 00:18:49.968 "name": "BaseBdev2", 00:18:49.968 "uuid": "d198bfb1-81d7-5f0f-8416-ecd98e94e52b", 00:18:49.968 "is_configured": true, 00:18:49.968 "data_offset": 2048, 00:18:49.968 "data_size": 63488 00:18:49.968 } 00:18:49.968 ] 00:18:49.968 }' 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.968 07:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.227 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:50.227 07:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:50.512 [2024-10-07 07:39:49.899511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.448 "name": "raid_bdev1", 00:18:51.448 "uuid": "2951d111-e2d3-4656-9bef-3dbd34c92d0f", 00:18:51.448 "strip_size_kb": 64, 00:18:51.448 "state": "online", 00:18:51.448 "raid_level": "raid0", 00:18:51.448 "superblock": true, 00:18:51.448 "num_base_bdevs": 2, 00:18:51.448 "num_base_bdevs_discovered": 2, 00:18:51.448 "num_base_bdevs_operational": 2, 00:18:51.448 "base_bdevs_list": [ 00:18:51.448 { 00:18:51.448 "name": "BaseBdev1", 00:18:51.448 "uuid": "91ed08a5-dfc3-5c77-9d4a-4f5094e6289b", 00:18:51.448 "is_configured": true, 00:18:51.448 "data_offset": 2048, 00:18:51.448 "data_size": 63488 00:18:51.448 }, 00:18:51.448 { 00:18:51.448 "name": "BaseBdev2", 00:18:51.448 "uuid": "d198bfb1-81d7-5f0f-8416-ecd98e94e52b", 00:18:51.448 "is_configured": true, 00:18:51.448 "data_offset": 2048, 00:18:51.448 "data_size": 63488 00:18:51.448 } 00:18:51.448 ] 00:18:51.448 }' 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.448 07:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.707 [2024-10-07 07:39:51.228171] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.707 [2024-10-07 07:39:51.228215] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.707 [2024-10-07 07:39:51.231136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.707 [2024-10-07 07:39:51.231201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.707 [2024-10-07 07:39:51.231237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.707 [2024-10-07 07:39:51.231252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:51.707 { 00:18:51.707 "results": [ 00:18:51.707 { 00:18:51.707 "job": "raid_bdev1", 00:18:51.707 "core_mask": "0x1", 00:18:51.707 "workload": "randrw", 00:18:51.707 "percentage": 50, 00:18:51.707 "status": "finished", 00:18:51.707 "queue_depth": 1, 00:18:51.707 "io_size": 131072, 00:18:51.707 "runtime": 1.326515, 00:18:51.707 "iops": 15862.617460036261, 00:18:51.707 "mibps": 1982.8271825045326, 00:18:51.707 "io_failed": 1, 00:18:51.707 "io_timeout": 0, 00:18:51.707 "avg_latency_us": 87.14046150399521, 00:18:51.707 "min_latency_us": 27.428571428571427, 00:18:51.707 "max_latency_us": 1427.7485714285715 00:18:51.707 } 00:18:51.707 ], 00:18:51.707 "core_count": 1 00:18:51.707 } 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61422 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 61422 ']' 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 61422 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:51.707 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 61422 00:18:51.965 killing process with pid 61422 00:18:51.965 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:51.965 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:51.965 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 61422' 00:18:51.965 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 61422 00:18:51.965 [2024-10-07 07:39:51.278602] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.965 07:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 61422 00:18:51.965 [2024-10-07 07:39:51.433970] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JvCDyPSTcP 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:18:53.343 00:18:53.343 real 0m4.712s 00:18:53.343 user 0m5.642s 00:18:53.343 sys 0m0.617s 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:53.343 ************************************ 00:18:53.343 END TEST raid_read_error_test 00:18:53.343 ************************************ 00:18:53.343 07:39:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.343 07:39:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:18:53.343 07:39:52 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:18:53.343 07:39:52 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:53.343 07:39:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.343 ************************************ 00:18:53.343 START TEST raid_write_error_test 00:18:53.343 ************************************ 00:18:53.343 07:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid0 2 write 00:18:53.343 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:18:53.343 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:18:53.343 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:53.602 07:39:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qkx79lLxiy 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61572 00:18:53.602 07:39:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61572 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 61572 ']' 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:53.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:53.602 07:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.602 [2024-10-07 07:39:53.032765] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:18:53.602 [2024-10-07 07:39:53.032956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61572 ] 00:18:53.861 [2024-10-07 07:39:53.217356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.120 [2024-10-07 07:39:53.442008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.120 [2024-10-07 07:39:53.660779] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.120 [2024-10-07 07:39:53.660822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.688 BaseBdev1_malloc 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.688 true 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.688 07:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.688 [2024-10-07 07:39:54.005517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:54.688 [2024-10-07 07:39:54.005578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.688 [2024-10-07 07:39:54.005598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:54.688 [2024-10-07 07:39:54.005613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.688 [2024-10-07 07:39:54.008115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.688 [2024-10-07 07:39:54.008166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:54.688 BaseBdev1 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.688 BaseBdev2_malloc 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:54.688 07:39:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.688 true 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.688 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.689 [2024-10-07 07:39:54.077108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:54.689 [2024-10-07 07:39:54.077173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.689 [2024-10-07 07:39:54.077194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:54.689 [2024-10-07 07:39:54.077209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.689 [2024-10-07 07:39:54.079619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.689 [2024-10-07 07:39:54.079668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:54.689 BaseBdev2 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.689 [2024-10-07 07:39:54.085191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:18:54.689 [2024-10-07 07:39:54.087322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.689 [2024-10-07 07:39:54.087520] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:54.689 [2024-10-07 07:39:54.087544] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:54.689 [2024-10-07 07:39:54.087855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:54.689 [2024-10-07 07:39:54.088039] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:54.689 [2024-10-07 07:39:54.088060] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:54.689 [2024-10-07 07:39:54.088256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.689 "name": "raid_bdev1", 00:18:54.689 "uuid": "f9214a24-bad6-44b4-b644-3c2d294a9b10", 00:18:54.689 "strip_size_kb": 64, 00:18:54.689 "state": "online", 00:18:54.689 "raid_level": "raid0", 00:18:54.689 "superblock": true, 00:18:54.689 "num_base_bdevs": 2, 00:18:54.689 "num_base_bdevs_discovered": 2, 00:18:54.689 "num_base_bdevs_operational": 2, 00:18:54.689 "base_bdevs_list": [ 00:18:54.689 { 00:18:54.689 "name": "BaseBdev1", 00:18:54.689 "uuid": "c1ca1cde-d344-5fbc-9ab9-436d6524cd07", 00:18:54.689 "is_configured": true, 00:18:54.689 "data_offset": 2048, 00:18:54.689 "data_size": 63488 00:18:54.689 }, 00:18:54.689 { 00:18:54.689 "name": "BaseBdev2", 00:18:54.689 "uuid": "66690ad7-a300-55c1-b0c9-b34293afa521", 00:18:54.689 "is_configured": true, 00:18:54.689 "data_offset": 2048, 00:18:54.689 "data_size": 63488 00:18:54.689 } 00:18:54.689 ] 00:18:54.689 }' 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.689 07:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.257 07:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:55.257 07:39:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:55.257 [2024-10-07 07:39:54.610727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.194 07:39:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.194 "name": "raid_bdev1", 00:18:56.194 "uuid": "f9214a24-bad6-44b4-b644-3c2d294a9b10", 00:18:56.194 "strip_size_kb": 64, 00:18:56.194 "state": "online", 00:18:56.194 "raid_level": "raid0", 00:18:56.194 "superblock": true, 00:18:56.194 "num_base_bdevs": 2, 00:18:56.194 "num_base_bdevs_discovered": 2, 00:18:56.194 "num_base_bdevs_operational": 2, 00:18:56.194 "base_bdevs_list": [ 00:18:56.194 { 00:18:56.194 "name": "BaseBdev1", 00:18:56.194 "uuid": "c1ca1cde-d344-5fbc-9ab9-436d6524cd07", 00:18:56.194 "is_configured": true, 00:18:56.194 "data_offset": 2048, 00:18:56.194 "data_size": 63488 00:18:56.194 }, 00:18:56.194 { 00:18:56.194 "name": "BaseBdev2", 00:18:56.194 "uuid": "66690ad7-a300-55c1-b0c9-b34293afa521", 00:18:56.194 "is_configured": true, 00:18:56.194 "data_offset": 2048, 00:18:56.194 "data_size": 63488 00:18:56.194 } 00:18:56.194 ] 00:18:56.194 }' 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.194 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.453 [2024-10-07 07:39:55.981765] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.453 [2024-10-07 07:39:55.981951] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.453 [2024-10-07 07:39:55.984999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.453 [2024-10-07 07:39:55.985045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.453 [2024-10-07 07:39:55.985079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.453 [2024-10-07 07:39:55.985094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:56.453 { 00:18:56.453 "results": [ 00:18:56.453 { 00:18:56.453 "job": "raid_bdev1", 00:18:56.453 "core_mask": "0x1", 00:18:56.453 "workload": "randrw", 00:18:56.453 "percentage": 50, 00:18:56.453 "status": "finished", 00:18:56.453 "queue_depth": 1, 00:18:56.453 "io_size": 131072, 00:18:56.453 "runtime": 1.369136, 00:18:56.453 "iops": 15083.965362096973, 00:18:56.453 "mibps": 1885.4956702621216, 00:18:56.453 "io_failed": 1, 00:18:56.453 "io_timeout": 0, 00:18:56.453 "avg_latency_us": 91.67637991021712, 00:18:56.453 "min_latency_us": 27.55047619047619, 00:18:56.453 "max_latency_us": 1435.5504761904763 00:18:56.453 } 00:18:56.453 ], 00:18:56.453 "core_count": 1 00:18:56.453 } 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61572 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@953 -- # '[' -z 61572 ']' 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 61572 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # uname 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:18:56.453 07:39:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 61572 00:18:56.712 killing process with pid 61572 00:18:56.712 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:18:56.712 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:18:56.712 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 61572' 00:18:56.712 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 61572 00:18:56.712 [2024-10-07 07:39:56.030538] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.712 07:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 61572 00:18:56.712 [2024-10-07 07:39:56.177721] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qkx79lLxiy 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:18:58.089 00:18:58.089 real 0m4.732s 00:18:58.089 user 0m5.607s 00:18:58.089 sys 0m0.612s 00:18:58.089 ************************************ 00:18:58.089 END TEST raid_write_error_test 00:18:58.089 ************************************ 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:18:58.089 07:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.349 07:39:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:58.349 07:39:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:18:58.349 07:39:57 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:18:58.349 07:39:57 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:18:58.349 07:39:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.349 ************************************ 00:18:58.349 START TEST raid_state_function_test 00:18:58.349 ************************************ 00:18:58.349 07:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test concat 2 false 00:18:58.349 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:18:58.349 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:58.349 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:58.349 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:58.349 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:58.349 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:58.350 Process raid pid: 61717 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61717 
00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61717' 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61717 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 61717 ']' 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:18:58.350 07:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.350 [2024-10-07 07:39:57.818227] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:18:58.350 [2024-10-07 07:39:57.818645] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.627 [2024-10-07 07:39:58.013676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.915 [2024-10-07 07:39:58.325308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.173 [2024-10-07 07:39:58.554629] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.173 [2024-10-07 07:39:58.554679] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.431 [2024-10-07 07:39:58.799346] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.431 [2024-10-07 07:39:58.799607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.431 [2024-10-07 07:39:58.799741] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.431 [2024-10-07 07:39:58.799774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:59.431 07:39:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:18:59.431 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.431 "name": "Existed_Raid", 00:18:59.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.431 "strip_size_kb": 64, 00:18:59.431 "state": "configuring", 00:18:59.431 
"raid_level": "concat", 00:18:59.431 "superblock": false, 00:18:59.431 "num_base_bdevs": 2, 00:18:59.431 "num_base_bdevs_discovered": 0, 00:18:59.431 "num_base_bdevs_operational": 2, 00:18:59.431 "base_bdevs_list": [ 00:18:59.431 { 00:18:59.431 "name": "BaseBdev1", 00:18:59.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.431 "is_configured": false, 00:18:59.431 "data_offset": 0, 00:18:59.431 "data_size": 0 00:18:59.432 }, 00:18:59.432 { 00:18:59.432 "name": "BaseBdev2", 00:18:59.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.432 "is_configured": false, 00:18:59.432 "data_offset": 0, 00:18:59.432 "data_size": 0 00:18:59.432 } 00:18:59.432 ] 00:18:59.432 }' 00:18:59.432 07:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.432 07:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 [2024-10-07 07:39:59.271355] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.000 [2024-10-07 07:39:59.271553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:19:00.000 [2024-10-07 07:39:59.279374] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.000 [2024-10-07 07:39:59.279551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.000 [2024-10-07 07:39:59.279657] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.000 [2024-10-07 07:39:59.279725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 [2024-10-07 07:39:59.359776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.000 BaseBdev1 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # 
rpc_cmd bdev_wait_for_examine 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.000 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.000 [ 00:19:00.000 { 00:19:00.000 "name": "BaseBdev1", 00:19:00.000 "aliases": [ 00:19:00.000 "2d7e4811-244f-4ae7-b5ff-94cbfcbbb46d" 00:19:00.000 ], 00:19:00.000 "product_name": "Malloc disk", 00:19:00.000 "block_size": 512, 00:19:00.000 "num_blocks": 65536, 00:19:00.000 "uuid": "2d7e4811-244f-4ae7-b5ff-94cbfcbbb46d", 00:19:00.000 "assigned_rate_limits": { 00:19:00.000 "rw_ios_per_sec": 0, 00:19:00.000 "rw_mbytes_per_sec": 0, 00:19:00.000 "r_mbytes_per_sec": 0, 00:19:00.000 "w_mbytes_per_sec": 0 00:19:00.000 }, 00:19:00.000 "claimed": true, 00:19:00.000 "claim_type": "exclusive_write", 00:19:00.000 "zoned": false, 00:19:00.000 "supported_io_types": { 00:19:00.000 "read": true, 00:19:00.000 "write": true, 00:19:00.000 "unmap": true, 00:19:00.000 "flush": true, 00:19:00.000 "reset": true, 00:19:00.000 "nvme_admin": false, 00:19:00.000 "nvme_io": false, 00:19:00.000 "nvme_io_md": false, 00:19:00.000 "write_zeroes": true, 00:19:00.000 "zcopy": true, 00:19:00.000 "get_zone_info": false, 00:19:00.000 "zone_management": false, 00:19:00.000 "zone_append": false, 00:19:00.000 "compare": false, 00:19:00.000 "compare_and_write": false, 00:19:00.000 "abort": true, 00:19:00.000 "seek_hole": false, 00:19:00.000 "seek_data": false, 00:19:00.000 "copy": true, 00:19:00.001 "nvme_iov_md": 
false 00:19:00.001 }, 00:19:00.001 "memory_domains": [ 00:19:00.001 { 00:19:00.001 "dma_device_id": "system", 00:19:00.001 "dma_device_type": 1 00:19:00.001 }, 00:19:00.001 { 00:19:00.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.001 "dma_device_type": 2 00:19:00.001 } 00:19:00.001 ], 00:19:00.001 "driver_specific": {} 00:19:00.001 } 00:19:00.001 ] 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.001 07:39:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.001 "name": "Existed_Raid", 00:19:00.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.001 "strip_size_kb": 64, 00:19:00.001 "state": "configuring", 00:19:00.001 "raid_level": "concat", 00:19:00.001 "superblock": false, 00:19:00.001 "num_base_bdevs": 2, 00:19:00.001 "num_base_bdevs_discovered": 1, 00:19:00.001 "num_base_bdevs_operational": 2, 00:19:00.001 "base_bdevs_list": [ 00:19:00.001 { 00:19:00.001 "name": "BaseBdev1", 00:19:00.001 "uuid": "2d7e4811-244f-4ae7-b5ff-94cbfcbbb46d", 00:19:00.001 "is_configured": true, 00:19:00.001 "data_offset": 0, 00:19:00.001 "data_size": 65536 00:19:00.001 }, 00:19:00.001 { 00:19:00.001 "name": "BaseBdev2", 00:19:00.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.001 "is_configured": false, 00:19:00.001 "data_offset": 0, 00:19:00.001 "data_size": 0 00:19:00.001 } 00:19:00.001 ] 00:19:00.001 }' 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.001 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.259 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.259 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.259 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.259 [2024-10-07 07:39:59.816401] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.259 [2024-10-07 07:39:59.816672] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.518 [2024-10-07 07:39:59.824396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.518 [2024-10-07 07:39:59.826832] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.518 [2024-10-07 07:39:59.826999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.518 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.518 "name": "Existed_Raid", 00:19:00.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.518 "strip_size_kb": 64, 00:19:00.518 "state": "configuring", 00:19:00.518 "raid_level": "concat", 00:19:00.518 "superblock": false, 00:19:00.518 "num_base_bdevs": 2, 00:19:00.518 "num_base_bdevs_discovered": 1, 00:19:00.518 "num_base_bdevs_operational": 2, 00:19:00.518 "base_bdevs_list": [ 00:19:00.518 { 00:19:00.518 "name": "BaseBdev1", 00:19:00.518 "uuid": "2d7e4811-244f-4ae7-b5ff-94cbfcbbb46d", 00:19:00.518 "is_configured": true, 00:19:00.518 "data_offset": 0, 00:19:00.518 "data_size": 65536 00:19:00.518 }, 00:19:00.518 { 00:19:00.518 "name": "BaseBdev2", 00:19:00.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.519 "is_configured": false, 00:19:00.519 "data_offset": 0, 00:19:00.519 "data_size": 0 
00:19:00.519 } 00:19:00.519 ] 00:19:00.519 }' 00:19:00.519 07:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.519 07:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.776 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:00.776 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.776 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.776 [2024-10-07 07:40:00.317642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.776 BaseBdev2 00:19:00.776 [2024-10-07 07:40:00.318030] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:00.777 [2024-10-07 07:40:00.318057] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:00.777 [2024-10-07 07:40:00.318427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:00.777 [2024-10-07 07:40:00.318625] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:00.777 [2024-10-07 07:40:00.318646] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:00.777 [2024-10-07 07:40:00.319006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:00.777 07:40:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:00.777 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.035 [ 00:19:01.035 { 00:19:01.035 "name": "BaseBdev2", 00:19:01.035 "aliases": [ 00:19:01.035 "739ed8aa-a504-48d8-81d2-68c9163e5bdf" 00:19:01.035 ], 00:19:01.035 "product_name": "Malloc disk", 00:19:01.035 "block_size": 512, 00:19:01.035 "num_blocks": 65536, 00:19:01.035 "uuid": "739ed8aa-a504-48d8-81d2-68c9163e5bdf", 00:19:01.035 "assigned_rate_limits": { 00:19:01.035 "rw_ios_per_sec": 0, 00:19:01.035 "rw_mbytes_per_sec": 0, 00:19:01.035 "r_mbytes_per_sec": 0, 00:19:01.035 "w_mbytes_per_sec": 0 00:19:01.035 }, 00:19:01.035 "claimed": true, 00:19:01.035 "claim_type": "exclusive_write", 00:19:01.035 "zoned": false, 00:19:01.035 "supported_io_types": { 00:19:01.035 "read": true, 00:19:01.035 "write": true, 00:19:01.035 "unmap": true, 00:19:01.035 "flush": true, 00:19:01.035 "reset": true, 00:19:01.035 "nvme_admin": false, 00:19:01.035 "nvme_io": false, 00:19:01.035 "nvme_io_md": 
false, 00:19:01.035 "write_zeroes": true, 00:19:01.035 "zcopy": true, 00:19:01.035 "get_zone_info": false, 00:19:01.035 "zone_management": false, 00:19:01.035 "zone_append": false, 00:19:01.035 "compare": false, 00:19:01.035 "compare_and_write": false, 00:19:01.035 "abort": true, 00:19:01.035 "seek_hole": false, 00:19:01.035 "seek_data": false, 00:19:01.035 "copy": true, 00:19:01.035 "nvme_iov_md": false 00:19:01.035 }, 00:19:01.035 "memory_domains": [ 00:19:01.035 { 00:19:01.035 "dma_device_id": "system", 00:19:01.035 "dma_device_type": 1 00:19:01.035 }, 00:19:01.035 { 00:19:01.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.035 "dma_device_type": 2 00:19:01.035 } 00:19:01.035 ], 00:19:01.035 "driver_specific": {} 00:19:01.035 } 00:19:01.035 ] 00:19:01.035 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.036 "name": "Existed_Raid", 00:19:01.036 "uuid": "794c0526-e644-4d49-9876-c4c07e866987", 00:19:01.036 "strip_size_kb": 64, 00:19:01.036 "state": "online", 00:19:01.036 "raid_level": "concat", 00:19:01.036 "superblock": false, 00:19:01.036 "num_base_bdevs": 2, 00:19:01.036 "num_base_bdevs_discovered": 2, 00:19:01.036 "num_base_bdevs_operational": 2, 00:19:01.036 "base_bdevs_list": [ 00:19:01.036 { 00:19:01.036 "name": "BaseBdev1", 00:19:01.036 "uuid": "2d7e4811-244f-4ae7-b5ff-94cbfcbbb46d", 00:19:01.036 "is_configured": true, 00:19:01.036 "data_offset": 0, 00:19:01.036 "data_size": 65536 00:19:01.036 }, 00:19:01.036 { 00:19:01.036 "name": "BaseBdev2", 00:19:01.036 "uuid": "739ed8aa-a504-48d8-81d2-68c9163e5bdf", 00:19:01.036 "is_configured": true, 00:19:01.036 "data_offset": 0, 00:19:01.036 "data_size": 65536 00:19:01.036 } 00:19:01.036 ] 00:19:01.036 }' 00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:01.036 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.294 [2024-10-07 07:40:00.806150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.294 "name": "Existed_Raid", 00:19:01.294 "aliases": [ 00:19:01.294 "794c0526-e644-4d49-9876-c4c07e866987" 00:19:01.294 ], 00:19:01.294 "product_name": "Raid Volume", 00:19:01.294 "block_size": 512, 00:19:01.294 "num_blocks": 131072, 00:19:01.294 "uuid": "794c0526-e644-4d49-9876-c4c07e866987", 00:19:01.294 "assigned_rate_limits": { 00:19:01.294 "rw_ios_per_sec": 0, 00:19:01.294 "rw_mbytes_per_sec": 0, 00:19:01.294 "r_mbytes_per_sec": 
0, 00:19:01.294 "w_mbytes_per_sec": 0 00:19:01.294 }, 00:19:01.294 "claimed": false, 00:19:01.294 "zoned": false, 00:19:01.294 "supported_io_types": { 00:19:01.294 "read": true, 00:19:01.294 "write": true, 00:19:01.294 "unmap": true, 00:19:01.294 "flush": true, 00:19:01.294 "reset": true, 00:19:01.294 "nvme_admin": false, 00:19:01.294 "nvme_io": false, 00:19:01.294 "nvme_io_md": false, 00:19:01.294 "write_zeroes": true, 00:19:01.294 "zcopy": false, 00:19:01.294 "get_zone_info": false, 00:19:01.294 "zone_management": false, 00:19:01.294 "zone_append": false, 00:19:01.294 "compare": false, 00:19:01.294 "compare_and_write": false, 00:19:01.294 "abort": false, 00:19:01.294 "seek_hole": false, 00:19:01.294 "seek_data": false, 00:19:01.294 "copy": false, 00:19:01.294 "nvme_iov_md": false 00:19:01.294 }, 00:19:01.294 "memory_domains": [ 00:19:01.294 { 00:19:01.294 "dma_device_id": "system", 00:19:01.294 "dma_device_type": 1 00:19:01.294 }, 00:19:01.294 { 00:19:01.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.294 "dma_device_type": 2 00:19:01.294 }, 00:19:01.294 { 00:19:01.294 "dma_device_id": "system", 00:19:01.294 "dma_device_type": 1 00:19:01.294 }, 00:19:01.294 { 00:19:01.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.294 "dma_device_type": 2 00:19:01.294 } 00:19:01.294 ], 00:19:01.294 "driver_specific": { 00:19:01.294 "raid": { 00:19:01.294 "uuid": "794c0526-e644-4d49-9876-c4c07e866987", 00:19:01.294 "strip_size_kb": 64, 00:19:01.294 "state": "online", 00:19:01.294 "raid_level": "concat", 00:19:01.294 "superblock": false, 00:19:01.294 "num_base_bdevs": 2, 00:19:01.294 "num_base_bdevs_discovered": 2, 00:19:01.294 "num_base_bdevs_operational": 2, 00:19:01.294 "base_bdevs_list": [ 00:19:01.294 { 00:19:01.294 "name": "BaseBdev1", 00:19:01.294 "uuid": "2d7e4811-244f-4ae7-b5ff-94cbfcbbb46d", 00:19:01.294 "is_configured": true, 00:19:01.294 "data_offset": 0, 00:19:01.294 "data_size": 65536 00:19:01.294 }, 00:19:01.294 { 00:19:01.294 "name": "BaseBdev2", 
00:19:01.294 "uuid": "739ed8aa-a504-48d8-81d2-68c9163e5bdf", 00:19:01.294 "is_configured": true, 00:19:01.294 "data_offset": 0, 00:19:01.294 "data_size": 65536 00:19:01.294 } 00:19:01.294 ] 00:19:01.294 } 00:19:01.294 } 00:19:01.294 }' 00:19:01.294 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:01.552 BaseBdev2' 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.552 07:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.552 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:01.552 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:01.552 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:01.552 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.552 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.552 [2024-10-07 07:40:01.025923] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.552 [2024-10-07 07:40:01.026082] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.552 [2024-10-07 07:40:01.026261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:01.809 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.810 "name": "Existed_Raid", 00:19:01.810 "uuid": "794c0526-e644-4d49-9876-c4c07e866987", 00:19:01.810 "strip_size_kb": 64, 00:19:01.810 
"state": "offline", 00:19:01.810 "raid_level": "concat", 00:19:01.810 "superblock": false, 00:19:01.810 "num_base_bdevs": 2, 00:19:01.810 "num_base_bdevs_discovered": 1, 00:19:01.810 "num_base_bdevs_operational": 1, 00:19:01.810 "base_bdevs_list": [ 00:19:01.810 { 00:19:01.810 "name": null, 00:19:01.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.810 "is_configured": false, 00:19:01.810 "data_offset": 0, 00:19:01.810 "data_size": 65536 00:19:01.810 }, 00:19:01.810 { 00:19:01.810 "name": "BaseBdev2", 00:19:01.810 "uuid": "739ed8aa-a504-48d8-81d2-68c9163e5bdf", 00:19:01.810 "is_configured": true, 00:19:01.810 "data_offset": 0, 00:19:01.810 "data_size": 65536 00:19:01.810 } 00:19:01.810 ] 00:19:01.810 }' 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.810 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.067 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:02.067 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:02.067 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:02.067 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.067 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:02.067 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.067 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.325 [2024-10-07 07:40:01.642687] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:02.325 [2024-10-07 07:40:01.642899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61717 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 61717 ']' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@957 -- # kill -0 61717 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 61717 00:19:02.325 killing process with pid 61717 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 61717' 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 61717 00:19:02.325 [2024-10-07 07:40:01.834188] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.325 07:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 61717 00:19:02.325 [2024-10-07 07:40:01.851407] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.699 ************************************ 00:19:03.699 END TEST raid_state_function_test 00:19:03.699 ************************************ 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:03.699 00:19:03.699 real 0m5.486s 00:19:03.699 user 0m7.759s 00:19:03.699 sys 0m1.015s 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.699 07:40:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:19:03.699 07:40:03 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 
']' 00:19:03.699 07:40:03 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:03.699 07:40:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.699 ************************************ 00:19:03.699 START TEST raid_state_function_test_sb 00:19:03.699 ************************************ 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test concat 2 true 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.699 Process raid pid: 61970 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61970 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61970' 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61970 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 61970 ']' 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:03.699 07:40:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:03.699 07:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.957 [2024-10-07 07:40:03.360135] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:19:03.957 [2024-10-07 07:40:03.360546] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.215 [2024-10-07 07:40:03.548199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.473 [2024-10-07 07:40:03.786446] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.473 [2024-10-07 07:40:04.016498] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.473 [2024-10-07 07:40:04.016779] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.039 [2024-10-07 
07:40:04.444502] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:05.039 [2024-10-07 07:40:04.444716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:05.039 [2024-10-07 07:40:04.444740] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.039 [2024-10-07 07:40:04.444758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.039 "name": "Existed_Raid", 00:19:05.039 "uuid": "18ebf36a-ccff-48b1-a8e2-4af20e1bc9a0", 00:19:05.039 "strip_size_kb": 64, 00:19:05.039 "state": "configuring", 00:19:05.039 "raid_level": "concat", 00:19:05.039 "superblock": true, 00:19:05.039 "num_base_bdevs": 2, 00:19:05.039 "num_base_bdevs_discovered": 0, 00:19:05.039 "num_base_bdevs_operational": 2, 00:19:05.039 "base_bdevs_list": [ 00:19:05.039 { 00:19:05.039 "name": "BaseBdev1", 00:19:05.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.039 "is_configured": false, 00:19:05.039 "data_offset": 0, 00:19:05.039 "data_size": 0 00:19:05.039 }, 00:19:05.039 { 00:19:05.039 "name": "BaseBdev2", 00:19:05.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.039 "is_configured": false, 00:19:05.039 "data_offset": 0, 00:19:05.039 "data_size": 0 00:19:05.039 } 00:19:05.039 ] 00:19:05.039 }' 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.039 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.643 [2024-10-07 07:40:04.872525] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.643 [2024-10-07 07:40:04.872577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.643 [2024-10-07 07:40:04.880587] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:05.643 [2024-10-07 07:40:04.880830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:05.643 [2024-10-07 07:40:04.880965] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.643 [2024-10-07 07:40:04.881112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.643 [2024-10-07 07:40:04.939995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.643 BaseBdev1 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.643 [ 00:19:05.643 { 00:19:05.643 "name": "BaseBdev1", 00:19:05.643 "aliases": [ 00:19:05.643 "bd8b6204-0e44-418d-9459-b55663bfb19d" 00:19:05.643 ], 00:19:05.643 "product_name": "Malloc disk", 00:19:05.643 "block_size": 512, 00:19:05.643 "num_blocks": 65536, 00:19:05.643 "uuid": "bd8b6204-0e44-418d-9459-b55663bfb19d", 00:19:05.643 "assigned_rate_limits": { 00:19:05.643 "rw_ios_per_sec": 0, 00:19:05.643 "rw_mbytes_per_sec": 0, 00:19:05.643 "r_mbytes_per_sec": 0, 00:19:05.643 
"w_mbytes_per_sec": 0 00:19:05.643 }, 00:19:05.643 "claimed": true, 00:19:05.643 "claim_type": "exclusive_write", 00:19:05.643 "zoned": false, 00:19:05.643 "supported_io_types": { 00:19:05.643 "read": true, 00:19:05.643 "write": true, 00:19:05.643 "unmap": true, 00:19:05.643 "flush": true, 00:19:05.643 "reset": true, 00:19:05.643 "nvme_admin": false, 00:19:05.643 "nvme_io": false, 00:19:05.643 "nvme_io_md": false, 00:19:05.643 "write_zeroes": true, 00:19:05.643 "zcopy": true, 00:19:05.643 "get_zone_info": false, 00:19:05.643 "zone_management": false, 00:19:05.643 "zone_append": false, 00:19:05.643 "compare": false, 00:19:05.643 "compare_and_write": false, 00:19:05.643 "abort": true, 00:19:05.643 "seek_hole": false, 00:19:05.643 "seek_data": false, 00:19:05.643 "copy": true, 00:19:05.643 "nvme_iov_md": false 00:19:05.643 }, 00:19:05.643 "memory_domains": [ 00:19:05.643 { 00:19:05.643 "dma_device_id": "system", 00:19:05.643 "dma_device_type": 1 00:19:05.643 }, 00:19:05.643 { 00:19:05.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.643 "dma_device_type": 2 00:19:05.643 } 00:19:05.643 ], 00:19:05.643 "driver_specific": {} 00:19:05.643 } 00:19:05.643 ] 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.643 07:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.643 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.643 "name": "Existed_Raid", 00:19:05.643 "uuid": "3a7cdfc8-8a3d-463b-b34a-850cb2675513", 00:19:05.643 "strip_size_kb": 64, 00:19:05.643 "state": "configuring", 00:19:05.643 "raid_level": "concat", 00:19:05.643 "superblock": true, 00:19:05.643 "num_base_bdevs": 2, 00:19:05.643 "num_base_bdevs_discovered": 1, 00:19:05.643 "num_base_bdevs_operational": 2, 00:19:05.643 "base_bdevs_list": [ 00:19:05.643 { 00:19:05.643 "name": "BaseBdev1", 00:19:05.643 "uuid": "bd8b6204-0e44-418d-9459-b55663bfb19d", 00:19:05.643 "is_configured": true, 00:19:05.643 "data_offset": 2048, 00:19:05.643 "data_size": 63488 00:19:05.643 }, 00:19:05.643 { 00:19:05.643 "name": "BaseBdev2", 00:19:05.643 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:05.643 "is_configured": false, 00:19:05.643 "data_offset": 0, 00:19:05.643 "data_size": 0 00:19:05.643 } 00:19:05.643 ] 00:19:05.643 }' 00:19:05.644 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.644 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.902 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:05.902 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.903 [2024-10-07 07:40:05.404208] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.903 [2024-10-07 07:40:05.404272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.903 [2024-10-07 07:40:05.412245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.903 [2024-10-07 07:40:05.414520] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.903 [2024-10-07 07:40:05.414701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.903 07:40:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.162 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.162 "name": "Existed_Raid", 00:19:06.162 "uuid": "5178955b-3f25-4c82-af02-abc5947b4e9f", 00:19:06.162 "strip_size_kb": 64, 00:19:06.162 "state": "configuring", 00:19:06.162 "raid_level": "concat", 00:19:06.162 "superblock": true, 00:19:06.162 "num_base_bdevs": 2, 00:19:06.162 "num_base_bdevs_discovered": 1, 00:19:06.162 "num_base_bdevs_operational": 2, 00:19:06.162 "base_bdevs_list": [ 00:19:06.162 { 00:19:06.162 "name": "BaseBdev1", 00:19:06.162 "uuid": "bd8b6204-0e44-418d-9459-b55663bfb19d", 00:19:06.162 "is_configured": true, 00:19:06.162 "data_offset": 2048, 00:19:06.162 "data_size": 63488 00:19:06.162 }, 00:19:06.162 { 00:19:06.162 "name": "BaseBdev2", 00:19:06.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.162 "is_configured": false, 00:19:06.162 "data_offset": 0, 00:19:06.162 "data_size": 0 00:19:06.162 } 00:19:06.162 ] 00:19:06.162 }' 00:19:06.162 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.162 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.420 [2024-10-07 07:40:05.918598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.420 BaseBdev2 00:19:06.420 [2024-10-07 07:40:05.919146] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:06.420 [2024-10-07 07:40:05.919176] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:19:06.420 [2024-10-07 07:40:05.919489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:06.420 [2024-10-07 07:40:05.919637] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:06.420 [2024-10-07 07:40:05.919654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:06.420 [2024-10-07 07:40:05.919837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:19:06.420 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.420 [ 00:19:06.420 { 00:19:06.420 "name": "BaseBdev2", 00:19:06.420 "aliases": [ 00:19:06.420 "23bedf50-cd26-4cfe-a115-3fac3fe4950a" 00:19:06.420 ], 00:19:06.420 "product_name": "Malloc disk", 00:19:06.420 "block_size": 512, 00:19:06.420 "num_blocks": 65536, 00:19:06.420 "uuid": "23bedf50-cd26-4cfe-a115-3fac3fe4950a", 00:19:06.420 "assigned_rate_limits": { 00:19:06.420 "rw_ios_per_sec": 0, 00:19:06.420 "rw_mbytes_per_sec": 0, 00:19:06.420 "r_mbytes_per_sec": 0, 00:19:06.420 "w_mbytes_per_sec": 0 00:19:06.420 }, 00:19:06.420 "claimed": true, 00:19:06.420 "claim_type": "exclusive_write", 00:19:06.420 "zoned": false, 00:19:06.420 "supported_io_types": { 00:19:06.420 "read": true, 00:19:06.420 "write": true, 00:19:06.420 "unmap": true, 00:19:06.420 "flush": true, 00:19:06.420 "reset": true, 00:19:06.420 "nvme_admin": false, 00:19:06.420 "nvme_io": false, 00:19:06.420 "nvme_io_md": false, 00:19:06.420 "write_zeroes": true, 00:19:06.420 "zcopy": true, 00:19:06.420 "get_zone_info": false, 00:19:06.420 "zone_management": false, 00:19:06.420 "zone_append": false, 00:19:06.420 "compare": false, 00:19:06.420 "compare_and_write": false, 00:19:06.420 "abort": true, 00:19:06.420 "seek_hole": false, 00:19:06.420 "seek_data": false, 00:19:06.420 "copy": true, 00:19:06.420 "nvme_iov_md": false 00:19:06.420 }, 00:19:06.420 "memory_domains": [ 00:19:06.420 { 00:19:06.420 "dma_device_id": "system", 00:19:06.420 "dma_device_type": 1 00:19:06.420 }, 00:19:06.420 { 00:19:06.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.420 "dma_device_type": 2 00:19:06.420 } 00:19:06.420 ], 00:19:06.421 "driver_specific": {} 00:19:06.421 } 00:19:06.421 ] 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@910 -- # return 0 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.421 07:40:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.679 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.679 "name": "Existed_Raid", 00:19:06.679 "uuid": "5178955b-3f25-4c82-af02-abc5947b4e9f", 00:19:06.679 "strip_size_kb": 64, 00:19:06.679 "state": "online", 00:19:06.679 "raid_level": "concat", 00:19:06.679 "superblock": true, 00:19:06.679 "num_base_bdevs": 2, 00:19:06.679 "num_base_bdevs_discovered": 2, 00:19:06.679 "num_base_bdevs_operational": 2, 00:19:06.679 "base_bdevs_list": [ 00:19:06.679 { 00:19:06.679 "name": "BaseBdev1", 00:19:06.679 "uuid": "bd8b6204-0e44-418d-9459-b55663bfb19d", 00:19:06.679 "is_configured": true, 00:19:06.679 "data_offset": 2048, 00:19:06.679 "data_size": 63488 00:19:06.679 }, 00:19:06.679 { 00:19:06.679 "name": "BaseBdev2", 00:19:06.679 "uuid": "23bedf50-cd26-4cfe-a115-3fac3fe4950a", 00:19:06.679 "is_configured": true, 00:19:06.679 "data_offset": 2048, 00:19:06.679 "data_size": 63488 00:19:06.679 } 00:19:06.679 ] 00:19:06.679 }' 00:19:06.679 07:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.679 07:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.938 07:40:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.938 [2024-10-07 07:40:06.419114] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:06.938 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.938 "name": "Existed_Raid", 00:19:06.938 "aliases": [ 00:19:06.938 "5178955b-3f25-4c82-af02-abc5947b4e9f" 00:19:06.938 ], 00:19:06.938 "product_name": "Raid Volume", 00:19:06.938 "block_size": 512, 00:19:06.938 "num_blocks": 126976, 00:19:06.938 "uuid": "5178955b-3f25-4c82-af02-abc5947b4e9f", 00:19:06.938 "assigned_rate_limits": { 00:19:06.938 "rw_ios_per_sec": 0, 00:19:06.938 "rw_mbytes_per_sec": 0, 00:19:06.938 "r_mbytes_per_sec": 0, 00:19:06.938 "w_mbytes_per_sec": 0 00:19:06.938 }, 00:19:06.938 "claimed": false, 00:19:06.939 "zoned": false, 00:19:06.939 "supported_io_types": { 00:19:06.939 "read": true, 00:19:06.939 "write": true, 00:19:06.939 "unmap": true, 00:19:06.939 "flush": true, 00:19:06.939 "reset": true, 00:19:06.939 "nvme_admin": false, 00:19:06.939 "nvme_io": false, 00:19:06.939 "nvme_io_md": false, 00:19:06.939 "write_zeroes": true, 00:19:06.939 "zcopy": false, 00:19:06.939 "get_zone_info": false, 00:19:06.939 "zone_management": false, 00:19:06.939 "zone_append": false, 00:19:06.939 "compare": false, 00:19:06.939 "compare_and_write": false, 00:19:06.939 "abort": false, 00:19:06.939 "seek_hole": false, 00:19:06.939 "seek_data": false, 00:19:06.939 "copy": false, 00:19:06.939 "nvme_iov_md": 
false 00:19:06.939 }, 00:19:06.939 "memory_domains": [ 00:19:06.939 { 00:19:06.939 "dma_device_id": "system", 00:19:06.939 "dma_device_type": 1 00:19:06.939 }, 00:19:06.939 { 00:19:06.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.939 "dma_device_type": 2 00:19:06.939 }, 00:19:06.939 { 00:19:06.939 "dma_device_id": "system", 00:19:06.939 "dma_device_type": 1 00:19:06.939 }, 00:19:06.939 { 00:19:06.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.939 "dma_device_type": 2 00:19:06.939 } 00:19:06.939 ], 00:19:06.939 "driver_specific": { 00:19:06.939 "raid": { 00:19:06.939 "uuid": "5178955b-3f25-4c82-af02-abc5947b4e9f", 00:19:06.939 "strip_size_kb": 64, 00:19:06.939 "state": "online", 00:19:06.939 "raid_level": "concat", 00:19:06.939 "superblock": true, 00:19:06.939 "num_base_bdevs": 2, 00:19:06.939 "num_base_bdevs_discovered": 2, 00:19:06.939 "num_base_bdevs_operational": 2, 00:19:06.939 "base_bdevs_list": [ 00:19:06.939 { 00:19:06.939 "name": "BaseBdev1", 00:19:06.939 "uuid": "bd8b6204-0e44-418d-9459-b55663bfb19d", 00:19:06.939 "is_configured": true, 00:19:06.939 "data_offset": 2048, 00:19:06.939 "data_size": 63488 00:19:06.939 }, 00:19:06.939 { 00:19:06.939 "name": "BaseBdev2", 00:19:06.939 "uuid": "23bedf50-cd26-4cfe-a115-3fac3fe4950a", 00:19:06.939 "is_configured": true, 00:19:06.939 "data_offset": 2048, 00:19:06.939 "data_size": 63488 00:19:06.939 } 00:19:06.939 ] 00:19:06.939 } 00:19:06.939 } 00:19:06.939 }' 00:19:06.939 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:07.199 BaseBdev2' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.199 
07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.199 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.199 [2024-10-07 07:40:06.662936] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:07.199 [2024-10-07 07:40:06.663144] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.199 [2024-10-07 07:40:06.663328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.458 07:40:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.458 "name": "Existed_Raid", 00:19:07.458 "uuid": "5178955b-3f25-4c82-af02-abc5947b4e9f", 00:19:07.458 "strip_size_kb": 64, 00:19:07.458 "state": "offline", 00:19:07.458 "raid_level": "concat", 00:19:07.458 "superblock": true, 00:19:07.458 "num_base_bdevs": 2, 00:19:07.458 "num_base_bdevs_discovered": 1, 00:19:07.458 "num_base_bdevs_operational": 1, 00:19:07.458 "base_bdevs_list": [ 00:19:07.458 { 00:19:07.458 "name": null, 00:19:07.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.458 "is_configured": false, 00:19:07.458 "data_offset": 0, 00:19:07.458 "data_size": 63488 00:19:07.458 }, 00:19:07.458 { 00:19:07.458 "name": "BaseBdev2", 00:19:07.458 "uuid": "23bedf50-cd26-4cfe-a115-3fac3fe4950a", 00:19:07.458 "is_configured": true, 
00:19:07.458 "data_offset": 2048, 00:19:07.458 "data_size": 63488 00:19:07.458 } 00:19:07.458 ] 00:19:07.458 }' 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.458 07:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.717 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.977 [2024-10-07 07:40:07.277479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:07.977 [2024-10-07 07:40:07.277689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:07.977 07:40:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61970 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 61970 ']' 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 61970 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 61970 00:19:07.977 killing process with pid 61970 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 61970' 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 61970 00:19:07.977 [2024-10-07 07:40:07.460945] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.977 07:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 61970 00:19:07.977 [2024-10-07 07:40:07.479348] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.361 07:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:09.361 00:19:09.361 real 0m5.652s 00:19:09.361 user 0m8.084s 00:19:09.361 sys 0m0.904s 00:19:09.361 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:09.361 07:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 ************************************ 00:19:09.361 END TEST raid_state_function_test_sb 00:19:09.361 ************************************ 00:19:09.619 07:40:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:19:09.619 07:40:08 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:19:09.619 07:40:08 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:09.619 07:40:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.619 ************************************ 00:19:09.619 START TEST raid_superblock_test 00:19:09.619 ************************************ 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test concat 2 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62232 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62232 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 62232 ']' 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:09.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:09.619 07:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.619 [2024-10-07 07:40:09.046943] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:19:09.619 [2024-10-07 07:40:09.047264] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:19:09.878 [2024-10-07 07:40:09.208835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.878 [2024-10-07 07:40:09.428582] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.136 [2024-10-07 07:40:09.645982] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.136 [2024-10-07 07:40:09.646048] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 
00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.704 malloc1 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.704 [2024-10-07 07:40:10.058691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:10.704 [2024-10-07 07:40:10.058944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.704 [2024-10-07 07:40:10.059079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:19:10.704 [2024-10-07 07:40:10.059179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.704 [2024-10-07 07:40:10.062076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.704 [2024-10-07 07:40:10.062243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:10.704 pt1 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.704 malloc2 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.704 [2024-10-07 07:40:10.132202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:10.704 [2024-10-07 07:40:10.132277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.704 [2024-10-07 07:40:10.132306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:10.704 [2024-10-07 07:40:10.132318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.704 [2024-10-07 07:40:10.135122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.704 [2024-10-07 07:40:10.135166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:10.704 pt2 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.704 [2024-10-07 07:40:10.140315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:10.704 [2024-10-07 07:40:10.143060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:10.704 [2024-10-07 07:40:10.143469] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:19:10.704 [2024-10-07 07:40:10.143584] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:10.704 [2024-10-07 07:40:10.143951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:10.704 [2024-10-07 07:40:10.144162] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:10.704 [2024-10-07 07:40:10.144208] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:10.704 [2024-10-07 07:40:10.144592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.704 "name": "raid_bdev1", 00:19:10.704 "uuid": "0d9c03e1-942e-4609-8abc-dd270cd2a680", 00:19:10.704 "strip_size_kb": 64, 00:19:10.704 "state": "online", 00:19:10.704 "raid_level": "concat", 00:19:10.704 "superblock": true, 00:19:10.704 "num_base_bdevs": 2, 00:19:10.704 "num_base_bdevs_discovered": 2, 00:19:10.704 "num_base_bdevs_operational": 2, 00:19:10.704 "base_bdevs_list": [ 00:19:10.704 { 00:19:10.704 "name": "pt1", 00:19:10.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.704 "is_configured": true, 00:19:10.704 "data_offset": 2048, 00:19:10.704 "data_size": 63488 00:19:10.704 }, 00:19:10.704 { 00:19:10.704 "name": "pt2", 00:19:10.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.704 "is_configured": true, 00:19:10.704 "data_offset": 2048, 00:19:10.704 "data_size": 63488 00:19:10.704 } 00:19:10.704 ] 00:19:10.704 }' 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.704 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:11.270 [2024-10-07 07:40:10.624951] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.270 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.270 "name": "raid_bdev1", 00:19:11.270 "aliases": [ 00:19:11.270 "0d9c03e1-942e-4609-8abc-dd270cd2a680" 00:19:11.270 ], 00:19:11.270 "product_name": "Raid Volume", 00:19:11.270 "block_size": 512, 00:19:11.270 "num_blocks": 126976, 00:19:11.270 "uuid": "0d9c03e1-942e-4609-8abc-dd270cd2a680", 00:19:11.270 "assigned_rate_limits": { 00:19:11.270 "rw_ios_per_sec": 0, 00:19:11.270 "rw_mbytes_per_sec": 0, 00:19:11.270 "r_mbytes_per_sec": 0, 00:19:11.270 "w_mbytes_per_sec": 0 00:19:11.270 }, 00:19:11.270 "claimed": false, 00:19:11.270 "zoned": false, 00:19:11.270 "supported_io_types": { 00:19:11.270 "read": true, 00:19:11.270 "write": true, 00:19:11.270 "unmap": true, 00:19:11.270 "flush": true, 00:19:11.270 "reset": true, 00:19:11.270 "nvme_admin": false, 00:19:11.270 "nvme_io": false, 00:19:11.270 "nvme_io_md": false, 00:19:11.270 "write_zeroes": true, 00:19:11.270 "zcopy": false, 00:19:11.270 "get_zone_info": false, 00:19:11.270 "zone_management": false, 00:19:11.270 
"zone_append": false, 00:19:11.270 "compare": false, 00:19:11.270 "compare_and_write": false, 00:19:11.270 "abort": false, 00:19:11.271 "seek_hole": false, 00:19:11.271 "seek_data": false, 00:19:11.271 "copy": false, 00:19:11.271 "nvme_iov_md": false 00:19:11.271 }, 00:19:11.271 "memory_domains": [ 00:19:11.271 { 00:19:11.271 "dma_device_id": "system", 00:19:11.271 "dma_device_type": 1 00:19:11.271 }, 00:19:11.271 { 00:19:11.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.271 "dma_device_type": 2 00:19:11.271 }, 00:19:11.271 { 00:19:11.271 "dma_device_id": "system", 00:19:11.271 "dma_device_type": 1 00:19:11.271 }, 00:19:11.271 { 00:19:11.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.271 "dma_device_type": 2 00:19:11.271 } 00:19:11.271 ], 00:19:11.271 "driver_specific": { 00:19:11.271 "raid": { 00:19:11.271 "uuid": "0d9c03e1-942e-4609-8abc-dd270cd2a680", 00:19:11.271 "strip_size_kb": 64, 00:19:11.271 "state": "online", 00:19:11.271 "raid_level": "concat", 00:19:11.271 "superblock": true, 00:19:11.271 "num_base_bdevs": 2, 00:19:11.271 "num_base_bdevs_discovered": 2, 00:19:11.271 "num_base_bdevs_operational": 2, 00:19:11.271 "base_bdevs_list": [ 00:19:11.271 { 00:19:11.271 "name": "pt1", 00:19:11.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.271 "is_configured": true, 00:19:11.271 "data_offset": 2048, 00:19:11.271 "data_size": 63488 00:19:11.271 }, 00:19:11.271 { 00:19:11.271 "name": "pt2", 00:19:11.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.271 "is_configured": true, 00:19:11.271 "data_offset": 2048, 00:19:11.271 "data_size": 63488 00:19:11.271 } 00:19:11.271 ] 00:19:11.271 } 00:19:11.271 } 00:19:11.271 }' 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:11.271 pt2' 00:19:11.271 07:40:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.271 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.529 [2024-10-07 07:40:10.840954] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0d9c03e1-942e-4609-8abc-dd270cd2a680 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0d9c03e1-942e-4609-8abc-dd270cd2a680 ']' 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.529 [2024-10-07 07:40:10.884665] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:11.529 [2024-10-07 07:40:10.884873] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.529 [2024-10-07 07:40:10.885015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.529 [2024-10-07 07:40:10.885071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.529 [2024-10-07 07:40:10.885090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:11.529 07:40:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | 
any' 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:11.529 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:19:11.530 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:11.530 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:19:11.530 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:19:11.530 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:19:11.530 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:19:11.530 07:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.530 [2024-10-07 07:40:11.008722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:11.530 [2024-10-07 07:40:11.011295] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:11.530 [2024-10-07 07:40:11.011517] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:11.530 [2024-10-07 07:40:11.011742] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:11.530 [2024-10-07 07:40:11.011886] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:11.530 [2024-10-07 07:40:11.011931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:11.530 request: 00:19:11.530 { 00:19:11.530 "name": "raid_bdev1", 00:19:11.530 "raid_level": "concat", 00:19:11.530 "base_bdevs": [ 00:19:11.530 "malloc1", 00:19:11.530 "malloc2" 00:19:11.530 ], 00:19:11.530 "strip_size_kb": 64, 00:19:11.530 "superblock": false, 00:19:11.530 "method": "bdev_raid_create", 00:19:11.530 "req_id": 1 00:19:11.530 } 00:19:11.530 Got JSON-RPC error response 00:19:11.530 response: 00:19:11.530 { 00:19:11.530 "code": -17, 00:19:11.530 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:11.530 } 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:11.530 07:40:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.530 [2024-10-07 07:40:11.072753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:11.530 [2024-10-07 07:40:11.073036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.530 [2024-10-07 07:40:11.073104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:11.530 [2024-10-07 07:40:11.073207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.530 [2024-10-07 07:40:11.076018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.530 [2024-10-07 07:40:11.076209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:11.530 pt1 00:19:11.530 [2024-10-07 07:40:11.076454] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:11.530 [2024-10-07 07:40:11.076543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:11.530 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.788 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:11.788 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.788 "name": "raid_bdev1", 00:19:11.788 "uuid": "0d9c03e1-942e-4609-8abc-dd270cd2a680", 00:19:11.788 "strip_size_kb": 64, 00:19:11.788 "state": "configuring", 00:19:11.788 "raid_level": "concat", 00:19:11.788 "superblock": true, 00:19:11.788 "num_base_bdevs": 2, 00:19:11.788 
"num_base_bdevs_discovered": 1, 00:19:11.788 "num_base_bdevs_operational": 2, 00:19:11.788 "base_bdevs_list": [ 00:19:11.788 { 00:19:11.788 "name": "pt1", 00:19:11.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.788 "is_configured": true, 00:19:11.788 "data_offset": 2048, 00:19:11.788 "data_size": 63488 00:19:11.788 }, 00:19:11.788 { 00:19:11.788 "name": null, 00:19:11.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.788 "is_configured": false, 00:19:11.788 "data_offset": 2048, 00:19:11.788 "data_size": 63488 00:19:11.788 } 00:19:11.788 ] 00:19:11.788 }' 00:19:11.788 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.788 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.046 [2024-10-07 07:40:11.532932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:12.046 [2024-10-07 07:40:11.533156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.046 [2024-10-07 07:40:11.533223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:12.046 [2024-10-07 07:40:11.533327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.046 [2024-10-07 07:40:11.533970] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.046 [2024-10-07 07:40:11.534129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:12.046 [2024-10-07 07:40:11.534238] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:12.046 [2024-10-07 07:40:11.534271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:12.046 [2024-10-07 07:40:11.534397] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:12.046 [2024-10-07 07:40:11.534413] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:12.046 [2024-10-07 07:40:11.534692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:12.046 [2024-10-07 07:40:11.534877] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:12.046 [2024-10-07 07:40:11.534890] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:12.046 [2024-10-07 07:40:11.535040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.046 pt2 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.046 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.047 "name": "raid_bdev1", 00:19:12.047 "uuid": "0d9c03e1-942e-4609-8abc-dd270cd2a680", 00:19:12.047 "strip_size_kb": 64, 00:19:12.047 "state": "online", 00:19:12.047 "raid_level": "concat", 00:19:12.047 "superblock": true, 00:19:12.047 "num_base_bdevs": 2, 00:19:12.047 "num_base_bdevs_discovered": 2, 00:19:12.047 "num_base_bdevs_operational": 2, 00:19:12.047 "base_bdevs_list": [ 00:19:12.047 { 00:19:12.047 "name": "pt1", 00:19:12.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:12.047 "is_configured": true, 00:19:12.047 "data_offset": 2048, 00:19:12.047 "data_size": 63488 00:19:12.047 }, 00:19:12.047 { 00:19:12.047 "name": "pt2", 00:19:12.047 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:12.047 "is_configured": true, 00:19:12.047 "data_offset": 2048, 00:19:12.047 "data_size": 63488 00:19:12.047 } 00:19:12.047 ] 00:19:12.047 }' 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.047 07:40:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.613 [2024-10-07 07:40:12.013340] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:12.613 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.613 "name": "raid_bdev1", 00:19:12.613 "aliases": [ 00:19:12.613 "0d9c03e1-942e-4609-8abc-dd270cd2a680" 00:19:12.613 ], 00:19:12.613 "product_name": "Raid Volume", 00:19:12.613 "block_size": 512, 00:19:12.613 
"num_blocks": 126976, 00:19:12.613 "uuid": "0d9c03e1-942e-4609-8abc-dd270cd2a680", 00:19:12.613 "assigned_rate_limits": { 00:19:12.613 "rw_ios_per_sec": 0, 00:19:12.613 "rw_mbytes_per_sec": 0, 00:19:12.613 "r_mbytes_per_sec": 0, 00:19:12.613 "w_mbytes_per_sec": 0 00:19:12.613 }, 00:19:12.613 "claimed": false, 00:19:12.613 "zoned": false, 00:19:12.613 "supported_io_types": { 00:19:12.613 "read": true, 00:19:12.613 "write": true, 00:19:12.613 "unmap": true, 00:19:12.613 "flush": true, 00:19:12.613 "reset": true, 00:19:12.613 "nvme_admin": false, 00:19:12.613 "nvme_io": false, 00:19:12.613 "nvme_io_md": false, 00:19:12.613 "write_zeroes": true, 00:19:12.613 "zcopy": false, 00:19:12.613 "get_zone_info": false, 00:19:12.613 "zone_management": false, 00:19:12.613 "zone_append": false, 00:19:12.613 "compare": false, 00:19:12.613 "compare_and_write": false, 00:19:12.613 "abort": false, 00:19:12.613 "seek_hole": false, 00:19:12.613 "seek_data": false, 00:19:12.613 "copy": false, 00:19:12.613 "nvme_iov_md": false 00:19:12.613 }, 00:19:12.613 "memory_domains": [ 00:19:12.613 { 00:19:12.613 "dma_device_id": "system", 00:19:12.613 "dma_device_type": 1 00:19:12.613 }, 00:19:12.613 { 00:19:12.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.613 "dma_device_type": 2 00:19:12.613 }, 00:19:12.613 { 00:19:12.613 "dma_device_id": "system", 00:19:12.613 "dma_device_type": 1 00:19:12.613 }, 00:19:12.613 { 00:19:12.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.613 "dma_device_type": 2 00:19:12.613 } 00:19:12.613 ], 00:19:12.613 "driver_specific": { 00:19:12.613 "raid": { 00:19:12.613 "uuid": "0d9c03e1-942e-4609-8abc-dd270cd2a680", 00:19:12.613 "strip_size_kb": 64, 00:19:12.613 "state": "online", 00:19:12.613 "raid_level": "concat", 00:19:12.613 "superblock": true, 00:19:12.613 "num_base_bdevs": 2, 00:19:12.613 "num_base_bdevs_discovered": 2, 00:19:12.613 "num_base_bdevs_operational": 2, 00:19:12.614 "base_bdevs_list": [ 00:19:12.614 { 00:19:12.614 "name": "pt1", 
00:19:12.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:12.614 "is_configured": true, 00:19:12.614 "data_offset": 2048, 00:19:12.614 "data_size": 63488 00:19:12.614 }, 00:19:12.614 { 00:19:12.614 "name": "pt2", 00:19:12.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.614 "is_configured": true, 00:19:12.614 "data_offset": 2048, 00:19:12.614 "data_size": 63488 00:19:12.614 } 00:19:12.614 ] 00:19:12.614 } 00:19:12.614 } 00:19:12.614 }' 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:12.614 pt2' 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.614 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.873 [2024-10-07 07:40:12.265374] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0d9c03e1-942e-4609-8abc-dd270cd2a680 '!=' 0d9c03e1-942e-4609-8abc-dd270cd2a680 ']' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 62232 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 62232 ']' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 62232 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 62232 00:19:12.873 killing process with pid 62232 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 62232' 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 62232 00:19:12.873 [2024-10-07 07:40:12.336735] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.873 [2024-10-07 07:40:12.336848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.873 [2024-10-07 07:40:12.336903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.873 [2024-10-07 07:40:12.336917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:12.873 07:40:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 62232 00:19:13.148 [2024-10-07 07:40:12.548992] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.526 07:40:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:14.526 ************************************ 00:19:14.526 END TEST 
raid_superblock_test 00:19:14.526 ************************************ 00:19:14.526 00:19:14.526 real 0m4.903s 00:19:14.526 user 0m6.875s 00:19:14.526 sys 0m0.827s 00:19:14.526 07:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:14.526 07:40:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.526 07:40:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:19:14.526 07:40:13 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:19:14.526 07:40:13 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:14.526 07:40:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.526 ************************************ 00:19:14.526 START TEST raid_read_error_test 00:19:14.526 ************************************ 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test concat 2 read 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7idoOnEHtP 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62439 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62439 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 62439 ']' 00:19:14.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:14.526 07:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.526 [2024-10-07 07:40:14.064924] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:19:14.526 [2024-10-07 07:40:14.065098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62439 ] 00:19:14.785 [2024-10-07 07:40:14.247506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.045 [2024-10-07 07:40:14.468238] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.302 [2024-10-07 07:40:14.687298] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.302 [2024-10-07 07:40:14.687338] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.560 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:15.560 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:19:15.560 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:19:15.560 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 BaseBdev1_malloc 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 true 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 [2024-10-07 07:40:14.988628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:15.561 [2024-10-07 07:40:14.988701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.561 [2024-10-07 07:40:14.988737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:15.561 [2024-10-07 07:40:14.988754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.561 [2024-10-07 07:40:14.991400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.561 [2024-10-07 07:40:14.991448] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.561 BaseBdev1 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 BaseBdev2_malloc 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 true 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 [2024-10-07 07:40:15.057258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:15.561 [2024-10-07 07:40:15.057467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.561 [2024-10-07 07:40:15.057530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:15.561 [2024-10-07 07:40:15.057550] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.561 [2024-10-07 07:40:15.060234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.561 [2024-10-07 07:40:15.060281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:15.561 BaseBdev2 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 [2024-10-07 07:40:15.065348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.561 [2024-10-07 07:40:15.067686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.561 [2024-10-07 07:40:15.068037] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:15.561 [2024-10-07 07:40:15.068159] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:15.561 [2024-10-07 07:40:15.068506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:15.561 [2024-10-07 07:40:15.068858] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:15.561 [2024-10-07 07:40:15.068972] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:15.561 [2024-10-07 07:40:15.069367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.561 "name": "raid_bdev1", 00:19:15.561 "uuid": "ec77bab8-ee75-41b0-93f0-bc40f2c8f5f4", 00:19:15.561 "strip_size_kb": 64, 00:19:15.561 "state": "online", 00:19:15.561 "raid_level": "concat", 00:19:15.561 "superblock": true, 00:19:15.561 "num_base_bdevs": 2, 00:19:15.561 
"num_base_bdevs_discovered": 2, 00:19:15.561 "num_base_bdevs_operational": 2, 00:19:15.561 "base_bdevs_list": [ 00:19:15.561 { 00:19:15.561 "name": "BaseBdev1", 00:19:15.561 "uuid": "d1fba709-d15c-5e91-a8b6-0d88296b6324", 00:19:15.561 "is_configured": true, 00:19:15.561 "data_offset": 2048, 00:19:15.561 "data_size": 63488 00:19:15.561 }, 00:19:15.561 { 00:19:15.561 "name": "BaseBdev2", 00:19:15.561 "uuid": "e1c8e96a-bab4-51b5-8af0-537dbcaaaa34", 00:19:15.561 "is_configured": true, 00:19:15.561 "data_offset": 2048, 00:19:15.561 "data_size": 63488 00:19:15.561 } 00:19:15.561 ] 00:19:15.561 }' 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.561 07:40:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.132 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:16.132 07:40:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:16.132 [2024-10-07 07:40:15.626960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:17.070 07:40:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.070 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:17.071 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.071 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:17.071 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.071 "name": "raid_bdev1", 00:19:17.071 "uuid": "ec77bab8-ee75-41b0-93f0-bc40f2c8f5f4", 00:19:17.071 "strip_size_kb": 64, 00:19:17.071 "state": "online", 00:19:17.071 "raid_level": "concat", 00:19:17.071 "superblock": true, 00:19:17.071 "num_base_bdevs": 2, 
00:19:17.071 "num_base_bdevs_discovered": 2, 00:19:17.071 "num_base_bdevs_operational": 2, 00:19:17.071 "base_bdevs_list": [ 00:19:17.071 { 00:19:17.071 "name": "BaseBdev1", 00:19:17.071 "uuid": "d1fba709-d15c-5e91-a8b6-0d88296b6324", 00:19:17.071 "is_configured": true, 00:19:17.071 "data_offset": 2048, 00:19:17.071 "data_size": 63488 00:19:17.071 }, 00:19:17.071 { 00:19:17.071 "name": "BaseBdev2", 00:19:17.071 "uuid": "e1c8e96a-bab4-51b5-8af0-537dbcaaaa34", 00:19:17.071 "is_configured": true, 00:19:17.071 "data_offset": 2048, 00:19:17.071 "data_size": 63488 00:19:17.071 } 00:19:17.071 ] 00:19:17.071 }' 00:19:17.071 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.071 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.638 [2024-10-07 07:40:16.968260] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:17.638 [2024-10-07 07:40:16.968425] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.638 [2024-10-07 07:40:16.971086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.638 [2024-10-07 07:40:16.971128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.638 [2024-10-07 07:40:16.971159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.638 [2024-10-07 07:40:16.971173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:17.638 { 00:19:17.638 "results": [ 00:19:17.638 { 00:19:17.638 "job": 
"raid_bdev1", 00:19:17.638 "core_mask": "0x1", 00:19:17.638 "workload": "randrw", 00:19:17.638 "percentage": 50, 00:19:17.638 "status": "finished", 00:19:17.638 "queue_depth": 1, 00:19:17.638 "io_size": 131072, 00:19:17.638 "runtime": 1.339106, 00:19:17.638 "iops": 15794.866127102709, 00:19:17.638 "mibps": 1974.3582658878386, 00:19:17.638 "io_failed": 1, 00:19:17.638 "io_timeout": 0, 00:19:17.638 "avg_latency_us": 87.51410705280601, 00:19:17.638 "min_latency_us": 26.940952380952382, 00:19:17.638 "max_latency_us": 1466.7580952380952 00:19:17.638 } 00:19:17.638 ], 00:19:17.638 "core_count": 1 00:19:17.638 } 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62439 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 62439 ']' 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 62439 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:17.638 07:40:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 62439 00:19:17.638 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:17.638 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:17.638 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 62439' 00:19:17.638 killing process with pid 62439 00:19:17.638 07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 62439 00:19:17.638 [2024-10-07 07:40:17.021386] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.638 
07:40:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 62439 00:19:17.638 [2024-10-07 07:40:17.164774] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7idoOnEHtP 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:19:19.018 00:19:19.018 real 0m4.621s 00:19:19.018 user 0m5.493s 00:19:19.018 sys 0m0.619s 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:19.018 07:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.018 ************************************ 00:19:19.018 END TEST raid_read_error_test 00:19:19.018 ************************************ 00:19:19.276 07:40:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:19:19.276 07:40:18 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:19:19.276 07:40:18 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:19.276 07:40:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.276 ************************************ 00:19:19.276 START TEST raid_write_error_test 00:19:19.276 ************************************ 00:19:19.276 07:40:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test concat 2 write 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:19.276 
07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ejBSvmzfCN 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62589 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62589 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 62589 ']' 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:19.276 07:40:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.276 [2024-10-07 07:40:18.749145] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:19:19.276 [2024-10-07 07:40:18.749596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62589 ] 00:19:19.535 [2024-10-07 07:40:18.944469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.794 [2024-10-07 07:40:19.166533] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.052 [2024-10-07 07:40:19.388148] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.052 [2024-10-07 07:40:19.388200] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 BaseBdev1_malloc 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 true 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 [2024-10-07 07:40:19.740065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:20.312 [2024-10-07 07:40:19.740260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.312 [2024-10-07 07:40:19.740371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:20.312 [2024-10-07 07:40:19.740495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.312 [2024-10-07 07:40:19.743189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.312 [2024-10-07 07:40:19.743234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.312 BaseBdev1 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 BaseBdev2_malloc 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:20.312 07:40:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 true 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 [2024-10-07 07:40:19.816248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:20.312 [2024-10-07 07:40:19.816439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.312 [2024-10-07 07:40:19.816496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:20.312 [2024-10-07 07:40:19.816514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.312 [2024-10-07 07:40:19.818922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.312 [2024-10-07 07:40:19.818967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:20.312 BaseBdev2 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 [2024-10-07 07:40:19.824342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:20.312 [2024-10-07 07:40:19.826577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.312 [2024-10-07 07:40:19.826801] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:20.312 [2024-10-07 07:40:19.826819] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:20.312 [2024-10-07 07:40:19.827090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:20.312 [2024-10-07 07:40:19.827258] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:20.312 [2024-10-07 07:40:19.827275] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:20.312 [2024-10-07 07:40:19.827442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.312 07:40:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.312 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:20.572 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.572 "name": "raid_bdev1", 00:19:20.572 "uuid": "c5e45607-e8fd-4d15-be23-75b4a2463833", 00:19:20.572 "strip_size_kb": 64, 00:19:20.572 "state": "online", 00:19:20.572 "raid_level": "concat", 00:19:20.572 "superblock": true, 00:19:20.572 "num_base_bdevs": 2, 00:19:20.572 "num_base_bdevs_discovered": 2, 00:19:20.572 "num_base_bdevs_operational": 2, 00:19:20.572 "base_bdevs_list": [ 00:19:20.572 { 00:19:20.572 "name": "BaseBdev1", 00:19:20.572 "uuid": "a062c227-a236-502c-991a-c059cbc97014", 00:19:20.572 "is_configured": true, 00:19:20.572 "data_offset": 2048, 00:19:20.572 "data_size": 63488 00:19:20.572 }, 00:19:20.572 { 00:19:20.572 "name": "BaseBdev2", 00:19:20.572 "uuid": "34ed59cd-ec6d-5373-b0c6-fd21732eccae", 00:19:20.572 "is_configured": true, 00:19:20.572 "data_offset": 2048, 00:19:20.572 "data_size": 63488 00:19:20.572 } 00:19:20.572 ] 00:19:20.572 }' 00:19:20.572 07:40:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.572 07:40:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.832 07:40:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:19:20.832 07:40:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:20.832 [2024-10-07 07:40:20.361945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.769 "name": "raid_bdev1", 00:19:21.769 "uuid": "c5e45607-e8fd-4d15-be23-75b4a2463833", 00:19:21.769 "strip_size_kb": 64, 00:19:21.769 "state": "online", 00:19:21.769 "raid_level": "concat", 00:19:21.769 "superblock": true, 00:19:21.769 "num_base_bdevs": 2, 00:19:21.769 "num_base_bdevs_discovered": 2, 00:19:21.769 "num_base_bdevs_operational": 2, 00:19:21.769 "base_bdevs_list": [ 00:19:21.769 { 00:19:21.769 "name": "BaseBdev1", 00:19:21.769 "uuid": "a062c227-a236-502c-991a-c059cbc97014", 00:19:21.769 "is_configured": true, 00:19:21.769 "data_offset": 2048, 00:19:21.769 "data_size": 63488 00:19:21.769 }, 00:19:21.769 { 00:19:21.769 "name": "BaseBdev2", 00:19:21.769 "uuid": "34ed59cd-ec6d-5373-b0c6-fd21732eccae", 00:19:21.769 "is_configured": true, 00:19:21.769 "data_offset": 2048, 00:19:21.769 "data_size": 63488 00:19:21.769 } 00:19:21.769 ] 00:19:21.769 }' 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.769 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.337 [2024-10-07 07:40:21.703177] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.337 [2024-10-07 07:40:21.703225] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.337 [2024-10-07 07:40:21.706146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.337 [2024-10-07 07:40:21.706195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.337 [2024-10-07 07:40:21.706229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.337 [2024-10-07 07:40:21.706243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:22.337 { 00:19:22.337 "results": [ 00:19:22.337 { 00:19:22.337 "job": "raid_bdev1", 00:19:22.337 "core_mask": "0x1", 00:19:22.337 "workload": "randrw", 00:19:22.337 "percentage": 50, 00:19:22.337 "status": "finished", 00:19:22.337 "queue_depth": 1, 00:19:22.337 "io_size": 131072, 00:19:22.337 "runtime": 1.33901, 00:19:22.337 "iops": 15358.361774744028, 00:19:22.337 "mibps": 1919.7952218430034, 00:19:22.337 "io_failed": 1, 00:19:22.337 "io_timeout": 0, 00:19:22.337 "avg_latency_us": 89.96970348656822, 00:19:22.337 "min_latency_us": 27.30666666666667, 00:19:22.337 "max_latency_us": 1575.9847619047619 00:19:22.337 } 00:19:22.337 ], 00:19:22.337 "core_count": 1 00:19:22.337 } 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62589 00:19:22.337 07:40:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 62589 ']' 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 62589 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # uname 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 62589 00:19:22.337 killing process with pid 62589 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 62589' 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 62589 00:19:22.337 07:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 62589 00:19:22.337 [2024-10-07 07:40:21.750352] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.647 [2024-10-07 07:40:21.907757] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ejBSvmzfCN 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:24.027 07:40:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:24.027 ************************************ 00:19:24.027 END TEST raid_write_error_test 00:19:24.027 ************************************ 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:19:24.027 00:19:24.027 real 0m4.692s 00:19:24.027 user 0m5.590s 00:19:24.027 sys 0m0.630s 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:24.027 07:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.027 07:40:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:24.027 07:40:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:19:24.027 07:40:23 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:19:24.027 07:40:23 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:24.027 07:40:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:24.027 ************************************ 00:19:24.027 START TEST raid_state_function_test 00:19:24.027 ************************************ 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 2 false 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62734 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62734' 00:19:24.027 
Process raid pid: 62734 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62734 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 62734 ']' 00:19:24.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:24.027 07:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.027 [2024-10-07 07:40:23.491106] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:19:24.027 [2024-10-07 07:40:23.491289] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.286 [2024-10-07 07:40:23.676970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.544 [2024-10-07 07:40:23.902833] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.802 [2024-10-07 07:40:24.113895] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.802 [2024-10-07 07:40:24.113950] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.061 [2024-10-07 07:40:24.399927] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.061 [2024-10-07 07:40:24.399980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:25.061 [2024-10-07 07:40:24.399991] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.061 [2024-10-07 07:40:24.400006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.061 07:40:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.061 "name": "Existed_Raid", 00:19:25.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.061 "strip_size_kb": 0, 00:19:25.061 "state": "configuring", 00:19:25.061 
"raid_level": "raid1", 00:19:25.061 "superblock": false, 00:19:25.061 "num_base_bdevs": 2, 00:19:25.061 "num_base_bdevs_discovered": 0, 00:19:25.061 "num_base_bdevs_operational": 2, 00:19:25.061 "base_bdevs_list": [ 00:19:25.061 { 00:19:25.061 "name": "BaseBdev1", 00:19:25.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.061 "is_configured": false, 00:19:25.061 "data_offset": 0, 00:19:25.061 "data_size": 0 00:19:25.061 }, 00:19:25.061 { 00:19:25.061 "name": "BaseBdev2", 00:19:25.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.061 "is_configured": false, 00:19:25.061 "data_offset": 0, 00:19:25.061 "data_size": 0 00:19:25.061 } 00:19:25.061 ] 00:19:25.061 }' 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.061 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 [2024-10-07 07:40:24.811948] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.321 [2024-10-07 07:40:24.811994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:25.321 [2024-10-07 07:40:24.823967] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.321 [2024-10-07 07:40:24.824012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:25.321 [2024-10-07 07:40:24.824023] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.321 [2024-10-07 07:40:24.824040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 [2024-10-07 07:40:24.878797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.321 BaseBdev1 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:19:25.321 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # 
rpc_cmd bdev_wait_for_examine 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.581 [ 00:19:25.581 { 00:19:25.581 "name": "BaseBdev1", 00:19:25.581 "aliases": [ 00:19:25.581 "20728ab4-6d10-4005-981f-fae8b8d3939a" 00:19:25.581 ], 00:19:25.581 "product_name": "Malloc disk", 00:19:25.581 "block_size": 512, 00:19:25.581 "num_blocks": 65536, 00:19:25.581 "uuid": "20728ab4-6d10-4005-981f-fae8b8d3939a", 00:19:25.581 "assigned_rate_limits": { 00:19:25.581 "rw_ios_per_sec": 0, 00:19:25.581 "rw_mbytes_per_sec": 0, 00:19:25.581 "r_mbytes_per_sec": 0, 00:19:25.581 "w_mbytes_per_sec": 0 00:19:25.581 }, 00:19:25.581 "claimed": true, 00:19:25.581 "claim_type": "exclusive_write", 00:19:25.581 "zoned": false, 00:19:25.581 "supported_io_types": { 00:19:25.581 "read": true, 00:19:25.581 "write": true, 00:19:25.581 "unmap": true, 00:19:25.581 "flush": true, 00:19:25.581 "reset": true, 00:19:25.581 "nvme_admin": false, 00:19:25.581 "nvme_io": false, 00:19:25.581 "nvme_io_md": false, 00:19:25.581 "write_zeroes": true, 00:19:25.581 "zcopy": true, 00:19:25.581 "get_zone_info": false, 00:19:25.581 "zone_management": false, 00:19:25.581 "zone_append": false, 00:19:25.581 "compare": false, 00:19:25.581 "compare_and_write": false, 00:19:25.581 "abort": true, 00:19:25.581 "seek_hole": false, 00:19:25.581 "seek_data": false, 00:19:25.581 "copy": true, 00:19:25.581 "nvme_iov_md": 
false 00:19:25.581 }, 00:19:25.581 "memory_domains": [ 00:19:25.581 { 00:19:25.581 "dma_device_id": "system", 00:19:25.581 "dma_device_type": 1 00:19:25.581 }, 00:19:25.581 { 00:19:25.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.581 "dma_device_type": 2 00:19:25.581 } 00:19:25.581 ], 00:19:25.581 "driver_specific": {} 00:19:25.581 } 00:19:25.581 ] 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.581 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.582 
07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.582 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.582 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.582 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.582 "name": "Existed_Raid", 00:19:25.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.582 "strip_size_kb": 0, 00:19:25.582 "state": "configuring", 00:19:25.582 "raid_level": "raid1", 00:19:25.582 "superblock": false, 00:19:25.582 "num_base_bdevs": 2, 00:19:25.582 "num_base_bdevs_discovered": 1, 00:19:25.582 "num_base_bdevs_operational": 2, 00:19:25.582 "base_bdevs_list": [ 00:19:25.582 { 00:19:25.582 "name": "BaseBdev1", 00:19:25.582 "uuid": "20728ab4-6d10-4005-981f-fae8b8d3939a", 00:19:25.582 "is_configured": true, 00:19:25.582 "data_offset": 0, 00:19:25.582 "data_size": 65536 00:19:25.582 }, 00:19:25.582 { 00:19:25.582 "name": "BaseBdev2", 00:19:25.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.582 "is_configured": false, 00:19:25.582 "data_offset": 0, 00:19:25.582 "data_size": 0 00:19:25.582 } 00:19:25.582 ] 00:19:25.582 }' 00:19:25.582 07:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.582 07:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.840 [2024-10-07 07:40:25.326958] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.840 [2024-10-07 07:40:25.327018] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.840 [2024-10-07 07:40:25.334981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.840 [2024-10-07 07:40:25.337249] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.840 [2024-10-07 07:40:25.337300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:25.840 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.840 "name": "Existed_Raid", 00:19:25.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.840 "strip_size_kb": 0, 00:19:25.840 "state": "configuring", 00:19:25.841 "raid_level": "raid1", 00:19:25.841 "superblock": false, 00:19:25.841 "num_base_bdevs": 2, 00:19:25.841 "num_base_bdevs_discovered": 1, 00:19:25.841 "num_base_bdevs_operational": 2, 00:19:25.841 "base_bdevs_list": [ 00:19:25.841 { 00:19:25.841 "name": "BaseBdev1", 00:19:25.841 "uuid": "20728ab4-6d10-4005-981f-fae8b8d3939a", 00:19:25.841 "is_configured": true, 00:19:25.841 "data_offset": 0, 00:19:25.841 "data_size": 65536 00:19:25.841 }, 00:19:25.841 { 00:19:25.841 "name": "BaseBdev2", 00:19:25.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.841 "is_configured": false, 00:19:25.841 "data_offset": 0, 00:19:25.841 "data_size": 0 00:19:25.841 } 00:19:25.841 ] 
00:19:25.841 }' 00:19:25.841 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.841 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.419 [2024-10-07 07:40:25.807536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.419 [2024-10-07 07:40:25.807594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:26.419 [2024-10-07 07:40:25.807607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:26.419 [2024-10-07 07:40:25.807904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:26.419 [2024-10-07 07:40:25.808071] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:26.419 [2024-10-07 07:40:25.808092] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:26.419 [2024-10-07 07:40:25.808357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.419 BaseBdev2 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@904 -- # local i 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.419 [ 00:19:26.419 { 00:19:26.419 "name": "BaseBdev2", 00:19:26.419 "aliases": [ 00:19:26.419 "1b66a46d-2ac2-44aa-8891-db27bba772b2" 00:19:26.419 ], 00:19:26.419 "product_name": "Malloc disk", 00:19:26.419 "block_size": 512, 00:19:26.419 "num_blocks": 65536, 00:19:26.419 "uuid": "1b66a46d-2ac2-44aa-8891-db27bba772b2", 00:19:26.419 "assigned_rate_limits": { 00:19:26.419 "rw_ios_per_sec": 0, 00:19:26.419 "rw_mbytes_per_sec": 0, 00:19:26.419 "r_mbytes_per_sec": 0, 00:19:26.419 "w_mbytes_per_sec": 0 00:19:26.419 }, 00:19:26.419 "claimed": true, 00:19:26.419 "claim_type": "exclusive_write", 00:19:26.419 "zoned": false, 00:19:26.419 "supported_io_types": { 00:19:26.419 "read": true, 00:19:26.419 "write": true, 00:19:26.419 "unmap": true, 00:19:26.419 "flush": true, 00:19:26.419 "reset": true, 00:19:26.419 "nvme_admin": false, 00:19:26.419 "nvme_io": false, 00:19:26.419 "nvme_io_md": false, 00:19:26.419 "write_zeroes": 
true, 00:19:26.419 "zcopy": true, 00:19:26.419 "get_zone_info": false, 00:19:26.419 "zone_management": false, 00:19:26.419 "zone_append": false, 00:19:26.419 "compare": false, 00:19:26.419 "compare_and_write": false, 00:19:26.419 "abort": true, 00:19:26.419 "seek_hole": false, 00:19:26.419 "seek_data": false, 00:19:26.419 "copy": true, 00:19:26.419 "nvme_iov_md": false 00:19:26.419 }, 00:19:26.419 "memory_domains": [ 00:19:26.419 { 00:19:26.419 "dma_device_id": "system", 00:19:26.419 "dma_device_type": 1 00:19:26.419 }, 00:19:26.419 { 00:19:26.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.419 "dma_device_type": 2 00:19:26.419 } 00:19:26.419 ], 00:19:26.419 "driver_specific": {} 00:19:26.419 } 00:19:26.419 ] 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.419 07:40:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.419 "name": "Existed_Raid", 00:19:26.419 "uuid": "93f5eeee-fed3-4787-a652-340e227e6764", 00:19:26.419 "strip_size_kb": 0, 00:19:26.419 "state": "online", 00:19:26.419 "raid_level": "raid1", 00:19:26.419 "superblock": false, 00:19:26.419 "num_base_bdevs": 2, 00:19:26.419 "num_base_bdevs_discovered": 2, 00:19:26.419 "num_base_bdevs_operational": 2, 00:19:26.419 "base_bdevs_list": [ 00:19:26.419 { 00:19:26.419 "name": "BaseBdev1", 00:19:26.419 "uuid": "20728ab4-6d10-4005-981f-fae8b8d3939a", 00:19:26.419 "is_configured": true, 00:19:26.419 "data_offset": 0, 00:19:26.419 "data_size": 65536 00:19:26.419 }, 00:19:26.419 { 00:19:26.419 "name": "BaseBdev2", 00:19:26.419 "uuid": "1b66a46d-2ac2-44aa-8891-db27bba772b2", 00:19:26.419 "is_configured": true, 00:19:26.419 "data_offset": 0, 00:19:26.419 "data_size": 65536 00:19:26.419 } 00:19:26.419 ] 00:19:26.419 }' 00:19:26.419 07:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.419 07:40:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:26.988 [2024-10-07 07:40:26.292050] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:26.988 "name": "Existed_Raid", 00:19:26.988 "aliases": [ 00:19:26.988 "93f5eeee-fed3-4787-a652-340e227e6764" 00:19:26.988 ], 00:19:26.988 "product_name": "Raid Volume", 00:19:26.988 "block_size": 512, 00:19:26.988 "num_blocks": 65536, 00:19:26.988 "uuid": "93f5eeee-fed3-4787-a652-340e227e6764", 00:19:26.988 "assigned_rate_limits": { 00:19:26.988 "rw_ios_per_sec": 0, 00:19:26.988 "rw_mbytes_per_sec": 0, 00:19:26.988 "r_mbytes_per_sec": 0, 00:19:26.988 
"w_mbytes_per_sec": 0 00:19:26.988 }, 00:19:26.988 "claimed": false, 00:19:26.988 "zoned": false, 00:19:26.988 "supported_io_types": { 00:19:26.988 "read": true, 00:19:26.988 "write": true, 00:19:26.988 "unmap": false, 00:19:26.988 "flush": false, 00:19:26.988 "reset": true, 00:19:26.988 "nvme_admin": false, 00:19:26.988 "nvme_io": false, 00:19:26.988 "nvme_io_md": false, 00:19:26.988 "write_zeroes": true, 00:19:26.988 "zcopy": false, 00:19:26.988 "get_zone_info": false, 00:19:26.988 "zone_management": false, 00:19:26.988 "zone_append": false, 00:19:26.988 "compare": false, 00:19:26.988 "compare_and_write": false, 00:19:26.988 "abort": false, 00:19:26.988 "seek_hole": false, 00:19:26.988 "seek_data": false, 00:19:26.988 "copy": false, 00:19:26.988 "nvme_iov_md": false 00:19:26.988 }, 00:19:26.988 "memory_domains": [ 00:19:26.988 { 00:19:26.988 "dma_device_id": "system", 00:19:26.988 "dma_device_type": 1 00:19:26.988 }, 00:19:26.988 { 00:19:26.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.988 "dma_device_type": 2 00:19:26.988 }, 00:19:26.988 { 00:19:26.988 "dma_device_id": "system", 00:19:26.988 "dma_device_type": 1 00:19:26.988 }, 00:19:26.988 { 00:19:26.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.988 "dma_device_type": 2 00:19:26.988 } 00:19:26.988 ], 00:19:26.988 "driver_specific": { 00:19:26.988 "raid": { 00:19:26.988 "uuid": "93f5eeee-fed3-4787-a652-340e227e6764", 00:19:26.988 "strip_size_kb": 0, 00:19:26.988 "state": "online", 00:19:26.988 "raid_level": "raid1", 00:19:26.988 "superblock": false, 00:19:26.988 "num_base_bdevs": 2, 00:19:26.988 "num_base_bdevs_discovered": 2, 00:19:26.988 "num_base_bdevs_operational": 2, 00:19:26.988 "base_bdevs_list": [ 00:19:26.988 { 00:19:26.988 "name": "BaseBdev1", 00:19:26.988 "uuid": "20728ab4-6d10-4005-981f-fae8b8d3939a", 00:19:26.988 "is_configured": true, 00:19:26.988 "data_offset": 0, 00:19:26.988 "data_size": 65536 00:19:26.988 }, 00:19:26.988 { 00:19:26.988 "name": "BaseBdev2", 00:19:26.988 "uuid": 
"1b66a46d-2ac2-44aa-8891-db27bba772b2", 00:19:26.988 "is_configured": true, 00:19:26.988 "data_offset": 0, 00:19:26.988 "data_size": 65536 00:19:26.988 } 00:19:26.988 ] 00:19:26.988 } 00:19:26.988 } 00:19:26.988 }' 00:19:26.988 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:26.989 BaseBdev2' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:26.989 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.989 [2024-10-07 07:40:26.499836] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.249 "name": "Existed_Raid", 00:19:27.249 "uuid": "93f5eeee-fed3-4787-a652-340e227e6764", 00:19:27.249 "strip_size_kb": 0, 00:19:27.249 "state": "online", 00:19:27.249 "raid_level": "raid1", 00:19:27.249 "superblock": false, 00:19:27.249 "num_base_bdevs": 2, 00:19:27.249 "num_base_bdevs_discovered": 1, 00:19:27.249 "num_base_bdevs_operational": 1, 00:19:27.249 "base_bdevs_list": [ 00:19:27.249 { 
00:19:27.249 "name": null, 00:19:27.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.249 "is_configured": false, 00:19:27.249 "data_offset": 0, 00:19:27.249 "data_size": 65536 00:19:27.249 }, 00:19:27.249 { 00:19:27.249 "name": "BaseBdev2", 00:19:27.249 "uuid": "1b66a46d-2ac2-44aa-8891-db27bba772b2", 00:19:27.249 "is_configured": true, 00:19:27.249 "data_offset": 0, 00:19:27.249 "data_size": 65536 00:19:27.249 } 00:19:27.249 ] 00:19:27.249 }' 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.249 07:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:27.816 [2024-10-07 07:40:27.126682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:27.816 [2024-10-07 07:40:27.126809] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.816 [2024-10-07 07:40:27.235980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.816 [2024-10-07 07:40:27.236031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.816 [2024-10-07 07:40:27.236047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62734 00:19:27.816 07:40:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 62734 ']' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # kill -0 62734 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 62734 00:19:27.816 killing process with pid 62734 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 62734' 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 62734 00:19:27.816 [2024-10-07 07:40:27.321154] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:27.816 07:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 62734 00:19:27.816 [2024-10-07 07:40:27.342038] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:29.717 00:19:29.717 real 0m5.450s 00:19:29.717 user 0m7.650s 00:19:29.717 sys 0m0.918s 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:29.717 ************************************ 00:19:29.717 END TEST raid_state_function_test 00:19:29.717 ************************************ 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.717 07:40:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:19:29.717 07:40:28 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:19:29.717 07:40:28 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:29.717 07:40:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.717 ************************************ 00:19:29.717 START TEST raid_state_function_test_sb 00:19:29.717 ************************************ 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 2 true 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:29.717 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62987 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:29.718 Process raid pid: 62987 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62987' 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62987 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 62987 ']' 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:29.718 07:40:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:29.718 07:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.718 [2024-10-07 07:40:29.009570] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:19:29.718 [2024-10-07 07:40:29.010026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.718 [2024-10-07 07:40:29.198033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.977 [2024-10-07 07:40:29.421752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.235 [2024-10-07 07:40:29.641928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.235 [2024-10-07 07:40:29.641972] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.495 [2024-10-07 07:40:29.990669] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.495 [2024-10-07 07:40:29.990950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.495 [2024-10-07 07:40:29.991052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.495 [2024-10-07 07:40:29.991105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:30.495 07:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.495 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:30.495 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.495 "name": "Existed_Raid", 00:19:30.495 "uuid": "65ff7dfd-2a93-4ae6-91d1-a23266205f91", 00:19:30.495 "strip_size_kb": 0, 00:19:30.495 "state": "configuring", 00:19:30.495 "raid_level": "raid1", 00:19:30.495 "superblock": true, 00:19:30.495 "num_base_bdevs": 2, 00:19:30.495 "num_base_bdevs_discovered": 0, 00:19:30.495 "num_base_bdevs_operational": 2, 00:19:30.495 "base_bdevs_list": [ 00:19:30.495 { 00:19:30.495 "name": "BaseBdev1", 00:19:30.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.495 "is_configured": false, 00:19:30.495 "data_offset": 0, 00:19:30.495 "data_size": 0 00:19:30.495 }, 00:19:30.495 { 00:19:30.495 "name": "BaseBdev2", 00:19:30.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.495 "is_configured": false, 00:19:30.495 "data_offset": 0, 00:19:30.495 "data_size": 0 00:19:30.495 } 00:19:30.495 ] 00:19:30.495 }' 00:19:30.495 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.495 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 [2024-10-07 07:40:30.442658] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:19:31.065 [2024-10-07 07:40:30.442858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 [2024-10-07 07:40:30.454691] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:31.065 [2024-10-07 07:40:30.454773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:31.065 [2024-10-07 07:40:30.454785] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.065 [2024-10-07 07:40:30.454803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 [2024-10-07 07:40:30.510578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.065 BaseBdev1 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 [ 00:19:31.065 { 00:19:31.065 "name": "BaseBdev1", 00:19:31.065 "aliases": [ 00:19:31.065 "dcd29263-ec49-457d-9e2f-b2e310a5b06d" 00:19:31.065 ], 00:19:31.065 "product_name": "Malloc disk", 00:19:31.065 "block_size": 512, 00:19:31.065 "num_blocks": 65536, 00:19:31.065 "uuid": "dcd29263-ec49-457d-9e2f-b2e310a5b06d", 00:19:31.065 "assigned_rate_limits": { 00:19:31.065 "rw_ios_per_sec": 0, 00:19:31.065 "rw_mbytes_per_sec": 0, 00:19:31.065 "r_mbytes_per_sec": 0, 00:19:31.065 "w_mbytes_per_sec": 0 00:19:31.065 }, 00:19:31.065 "claimed": true, 
00:19:31.065 "claim_type": "exclusive_write", 00:19:31.065 "zoned": false, 00:19:31.065 "supported_io_types": { 00:19:31.065 "read": true, 00:19:31.065 "write": true, 00:19:31.065 "unmap": true, 00:19:31.065 "flush": true, 00:19:31.065 "reset": true, 00:19:31.065 "nvme_admin": false, 00:19:31.065 "nvme_io": false, 00:19:31.065 "nvme_io_md": false, 00:19:31.065 "write_zeroes": true, 00:19:31.065 "zcopy": true, 00:19:31.065 "get_zone_info": false, 00:19:31.065 "zone_management": false, 00:19:31.065 "zone_append": false, 00:19:31.065 "compare": false, 00:19:31.065 "compare_and_write": false, 00:19:31.065 "abort": true, 00:19:31.065 "seek_hole": false, 00:19:31.065 "seek_data": false, 00:19:31.065 "copy": true, 00:19:31.065 "nvme_iov_md": false 00:19:31.065 }, 00:19:31.065 "memory_domains": [ 00:19:31.065 { 00:19:31.065 "dma_device_id": "system", 00:19:31.065 "dma_device_type": 1 00:19:31.065 }, 00:19:31.065 { 00:19:31.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.065 "dma_device_type": 2 00:19:31.065 } 00:19:31.065 ], 00:19:31.065 "driver_specific": {} 00:19:31.065 } 00:19:31.065 ] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.065 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.065 "name": "Existed_Raid", 00:19:31.065 "uuid": "757c2d91-016b-49c5-a065-f6279fcc7e91", 00:19:31.065 "strip_size_kb": 0, 00:19:31.065 "state": "configuring", 00:19:31.065 "raid_level": "raid1", 00:19:31.065 "superblock": true, 00:19:31.065 "num_base_bdevs": 2, 00:19:31.065 "num_base_bdevs_discovered": 1, 00:19:31.065 "num_base_bdevs_operational": 2, 00:19:31.065 "base_bdevs_list": [ 00:19:31.065 { 00:19:31.065 "name": "BaseBdev1", 00:19:31.065 "uuid": "dcd29263-ec49-457d-9e2f-b2e310a5b06d", 00:19:31.065 "is_configured": true, 00:19:31.065 "data_offset": 2048, 00:19:31.065 "data_size": 63488 00:19:31.065 }, 00:19:31.065 { 00:19:31.065 "name": "BaseBdev2", 00:19:31.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.066 "is_configured": false, 00:19:31.066 
"data_offset": 0, 00:19:31.066 "data_size": 0 00:19:31.066 } 00:19:31.066 ] 00:19:31.066 }' 00:19:31.066 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.066 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.634 [2024-10-07 07:40:30.978771] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.634 [2024-10-07 07:40:30.978839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.634 [2024-10-07 07:40:30.986827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.634 [2024-10-07 07:40:30.988955] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.634 [2024-10-07 07:40:30.989013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.634 07:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.634 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:31.634 07:40:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.634 "name": "Existed_Raid", 00:19:31.634 "uuid": "033ae0dc-fe72-44d9-9cc9-c9de77c38aa3", 00:19:31.634 "strip_size_kb": 0, 00:19:31.634 "state": "configuring", 00:19:31.634 "raid_level": "raid1", 00:19:31.634 "superblock": true, 00:19:31.634 "num_base_bdevs": 2, 00:19:31.634 "num_base_bdevs_discovered": 1, 00:19:31.634 "num_base_bdevs_operational": 2, 00:19:31.634 "base_bdevs_list": [ 00:19:31.634 { 00:19:31.634 "name": "BaseBdev1", 00:19:31.634 "uuid": "dcd29263-ec49-457d-9e2f-b2e310a5b06d", 00:19:31.634 "is_configured": true, 00:19:31.634 "data_offset": 2048, 00:19:31.634 "data_size": 63488 00:19:31.634 }, 00:19:31.634 { 00:19:31.634 "name": "BaseBdev2", 00:19:31.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.634 "is_configured": false, 00:19:31.634 "data_offset": 0, 00:19:31.634 "data_size": 0 00:19:31.634 } 00:19:31.634 ] 00:19:31.634 }' 00:19:31.634 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.634 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.893 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:31.893 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:31.893 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.153 [2024-10-07 07:40:31.468467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.153 [2024-10-07 07:40:31.468729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:32.153 [2024-10-07 07:40:31.468758] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:32.153 [2024-10-07 07:40:31.469033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:32.153 
[2024-10-07 07:40:31.469180] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:32.153 [2024-10-07 07:40:31.469194] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:32.153 [2024-10-07 07:40:31.469350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.153 BaseBdev2 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.153 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.154 [ 00:19:32.154 { 00:19:32.154 "name": "BaseBdev2", 00:19:32.154 "aliases": [ 00:19:32.154 "c40fddc8-5d5a-498d-b39a-2eca5e8417ba" 00:19:32.154 ], 00:19:32.154 "product_name": "Malloc disk", 00:19:32.154 "block_size": 512, 00:19:32.154 "num_blocks": 65536, 00:19:32.154 "uuid": "c40fddc8-5d5a-498d-b39a-2eca5e8417ba", 00:19:32.154 "assigned_rate_limits": { 00:19:32.154 "rw_ios_per_sec": 0, 00:19:32.154 "rw_mbytes_per_sec": 0, 00:19:32.154 "r_mbytes_per_sec": 0, 00:19:32.154 "w_mbytes_per_sec": 0 00:19:32.154 }, 00:19:32.154 "claimed": true, 00:19:32.154 "claim_type": "exclusive_write", 00:19:32.154 "zoned": false, 00:19:32.154 "supported_io_types": { 00:19:32.154 "read": true, 00:19:32.154 "write": true, 00:19:32.154 "unmap": true, 00:19:32.154 "flush": true, 00:19:32.154 "reset": true, 00:19:32.154 "nvme_admin": false, 00:19:32.154 "nvme_io": false, 00:19:32.154 "nvme_io_md": false, 00:19:32.154 "write_zeroes": true, 00:19:32.154 "zcopy": true, 00:19:32.154 "get_zone_info": false, 00:19:32.154 "zone_management": false, 00:19:32.154 "zone_append": false, 00:19:32.154 "compare": false, 00:19:32.154 "compare_and_write": false, 00:19:32.154 "abort": true, 00:19:32.154 "seek_hole": false, 00:19:32.154 "seek_data": false, 00:19:32.154 "copy": true, 00:19:32.154 "nvme_iov_md": false 00:19:32.154 }, 00:19:32.154 "memory_domains": [ 00:19:32.154 { 00:19:32.154 "dma_device_id": "system", 00:19:32.154 "dma_device_type": 1 00:19:32.154 }, 00:19:32.154 { 00:19:32.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.154 "dma_device_type": 2 00:19:32.154 } 00:19:32.154 ], 00:19:32.154 "driver_specific": {} 00:19:32.154 } 00:19:32.154 ] 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:19:32.154 "name": "Existed_Raid", 00:19:32.154 "uuid": "033ae0dc-fe72-44d9-9cc9-c9de77c38aa3", 00:19:32.154 "strip_size_kb": 0, 00:19:32.154 "state": "online", 00:19:32.154 "raid_level": "raid1", 00:19:32.154 "superblock": true, 00:19:32.154 "num_base_bdevs": 2, 00:19:32.154 "num_base_bdevs_discovered": 2, 00:19:32.154 "num_base_bdevs_operational": 2, 00:19:32.154 "base_bdevs_list": [ 00:19:32.154 { 00:19:32.154 "name": "BaseBdev1", 00:19:32.154 "uuid": "dcd29263-ec49-457d-9e2f-b2e310a5b06d", 00:19:32.154 "is_configured": true, 00:19:32.154 "data_offset": 2048, 00:19:32.154 "data_size": 63488 00:19:32.154 }, 00:19:32.154 { 00:19:32.154 "name": "BaseBdev2", 00:19:32.154 "uuid": "c40fddc8-5d5a-498d-b39a-2eca5e8417ba", 00:19:32.154 "is_configured": true, 00:19:32.154 "data_offset": 2048, 00:19:32.154 "data_size": 63488 00:19:32.154 } 00:19:32.154 ] 00:19:32.154 }' 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.154 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.413 [2024-10-07 07:40:31.941025] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.413 07:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.673 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.673 "name": "Existed_Raid", 00:19:32.673 "aliases": [ 00:19:32.673 "033ae0dc-fe72-44d9-9cc9-c9de77c38aa3" 00:19:32.673 ], 00:19:32.673 "product_name": "Raid Volume", 00:19:32.673 "block_size": 512, 00:19:32.673 "num_blocks": 63488, 00:19:32.673 "uuid": "033ae0dc-fe72-44d9-9cc9-c9de77c38aa3", 00:19:32.673 "assigned_rate_limits": { 00:19:32.673 "rw_ios_per_sec": 0, 00:19:32.673 "rw_mbytes_per_sec": 0, 00:19:32.673 "r_mbytes_per_sec": 0, 00:19:32.673 "w_mbytes_per_sec": 0 00:19:32.673 }, 00:19:32.673 "claimed": false, 00:19:32.673 "zoned": false, 00:19:32.673 "supported_io_types": { 00:19:32.673 "read": true, 00:19:32.673 "write": true, 00:19:32.673 "unmap": false, 00:19:32.673 "flush": false, 00:19:32.673 "reset": true, 00:19:32.673 "nvme_admin": false, 00:19:32.673 "nvme_io": false, 00:19:32.673 "nvme_io_md": false, 00:19:32.673 "write_zeroes": true, 00:19:32.673 "zcopy": false, 00:19:32.673 "get_zone_info": false, 00:19:32.673 "zone_management": false, 00:19:32.673 "zone_append": false, 00:19:32.673 "compare": false, 00:19:32.673 "compare_and_write": false, 00:19:32.673 "abort": false, 00:19:32.673 "seek_hole": false, 00:19:32.673 "seek_data": false, 00:19:32.673 "copy": false, 00:19:32.673 "nvme_iov_md": false 00:19:32.673 }, 00:19:32.673 "memory_domains": [ 00:19:32.673 { 00:19:32.673 "dma_device_id": "system", 00:19:32.673 "dma_device_type": 1 00:19:32.673 }, 
00:19:32.673 { 00:19:32.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.673 "dma_device_type": 2 00:19:32.673 }, 00:19:32.673 { 00:19:32.673 "dma_device_id": "system", 00:19:32.673 "dma_device_type": 1 00:19:32.673 }, 00:19:32.673 { 00:19:32.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.673 "dma_device_type": 2 00:19:32.673 } 00:19:32.673 ], 00:19:32.673 "driver_specific": { 00:19:32.673 "raid": { 00:19:32.673 "uuid": "033ae0dc-fe72-44d9-9cc9-c9de77c38aa3", 00:19:32.673 "strip_size_kb": 0, 00:19:32.673 "state": "online", 00:19:32.673 "raid_level": "raid1", 00:19:32.673 "superblock": true, 00:19:32.673 "num_base_bdevs": 2, 00:19:32.673 "num_base_bdevs_discovered": 2, 00:19:32.673 "num_base_bdevs_operational": 2, 00:19:32.673 "base_bdevs_list": [ 00:19:32.673 { 00:19:32.673 "name": "BaseBdev1", 00:19:32.673 "uuid": "dcd29263-ec49-457d-9e2f-b2e310a5b06d", 00:19:32.673 "is_configured": true, 00:19:32.673 "data_offset": 2048, 00:19:32.673 "data_size": 63488 00:19:32.673 }, 00:19:32.673 { 00:19:32.673 "name": "BaseBdev2", 00:19:32.673 "uuid": "c40fddc8-5d5a-498d-b39a-2eca5e8417ba", 00:19:32.673 "is_configured": true, 00:19:32.673 "data_offset": 2048, 00:19:32.673 "data_size": 63488 00:19:32.673 } 00:19:32.673 ] 00:19:32.673 } 00:19:32.673 } 00:19:32.673 }' 00:19:32.673 07:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:32.673 BaseBdev2' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.673 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.673 [2024-10-07 07:40:32.160822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.932 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.932 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:32.932 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:32.932 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:32.932 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:32.932 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:32.932 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.933 
07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.933 "name": "Existed_Raid", 00:19:32.933 "uuid": "033ae0dc-fe72-44d9-9cc9-c9de77c38aa3", 00:19:32.933 "strip_size_kb": 0, 00:19:32.933 "state": "online", 00:19:32.933 "raid_level": "raid1", 00:19:32.933 "superblock": true, 00:19:32.933 "num_base_bdevs": 2, 00:19:32.933 "num_base_bdevs_discovered": 1, 00:19:32.933 "num_base_bdevs_operational": 1, 00:19:32.933 "base_bdevs_list": [ 00:19:32.933 { 00:19:32.933 "name": null, 00:19:32.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.933 "is_configured": false, 00:19:32.933 "data_offset": 0, 00:19:32.933 "data_size": 63488 00:19:32.933 }, 00:19:32.933 { 00:19:32.933 "name": "BaseBdev2", 00:19:32.933 "uuid": "c40fddc8-5d5a-498d-b39a-2eca5e8417ba", 00:19:32.933 "is_configured": true, 00:19:32.933 "data_offset": 2048, 00:19:32.933 "data_size": 63488 00:19:32.933 } 00:19:32.933 ] 00:19:32.933 }' 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.933 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.191 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:33.191 07:40:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.449 [2024-10-07 07:40:32.811225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.449 [2024-10-07 07:40:32.811350] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.449 [2024-10-07 07:40:32.916319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.449 [2024-10-07 07:40:32.916377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.449 [2024-10-07 07:40:32.916392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62987 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 62987 ']' 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 62987 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:33.449 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 62987 00:19:33.450 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:33.450 07:40:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:33.450 killing process with pid 62987 00:19:33.450 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 62987' 00:19:33.450 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 62987 00:19:33.450 [2024-10-07 07:40:32.998184] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.450 07:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 62987 00:19:33.708 [2024-10-07 07:40:33.016597] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:35.084 07:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:35.084 00:19:35.084 real 0m5.469s 00:19:35.085 user 0m7.780s 00:19:35.085 sys 0m0.948s 00:19:35.085 07:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:35.085 07:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.085 ************************************ 00:19:35.085 END TEST raid_state_function_test_sb 00:19:35.085 ************************************ 00:19:35.085 07:40:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:19:35.085 07:40:34 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:19:35.085 07:40:34 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:35.085 07:40:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.085 ************************************ 00:19:35.085 START TEST raid_superblock_test 00:19:35.085 ************************************ 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid1 2 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63239 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63239 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 63239 ']' 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:35.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:35.085 07:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.085 [2024-10-07 07:40:34.518822] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:19:35.085 [2024-10-07 07:40:34.518974] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63239 ] 00:19:35.344 [2024-10-07 07:40:34.681220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.603 [2024-10-07 07:40:34.913720] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.603 [2024-10-07 07:40:35.125138] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.603 [2024-10-07 07:40:35.125201] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:35.861 07:40:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:35.861 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.119 malloc1 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.119 [2024-10-07 07:40:35.471338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.119 [2024-10-07 07:40:35.471408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.119 [2024-10-07 07:40:35.471433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:36.119 [2024-10-07 07:40:35.471448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.119 
[2024-10-07 07:40:35.473882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.119 [2024-10-07 07:40:35.473923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.119 pt1 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.119 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.119 malloc2 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.120 07:40:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.120 [2024-10-07 07:40:35.539878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.120 [2024-10-07 07:40:35.539960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.120 [2024-10-07 07:40:35.539988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:36.120 [2024-10-07 07:40:35.540000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.120 [2024-10-07 07:40:35.542716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.120 [2024-10-07 07:40:35.542788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.120 pt2 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.120 [2024-10-07 07:40:35.552021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.120 [2024-10-07 07:40:35.554409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.120 [2024-10-07 07:40:35.554636] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:36.120 [2024-10-07 07:40:35.554652] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:36.120 [2024-10-07 
07:40:35.554998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:36.120 [2024-10-07 07:40:35.555187] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:36.120 [2024-10-07 07:40:35.555210] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:36.120 [2024-10-07 07:40:35.555419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.120 "name": "raid_bdev1", 00:19:36.120 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:36.120 "strip_size_kb": 0, 00:19:36.120 "state": "online", 00:19:36.120 "raid_level": "raid1", 00:19:36.120 "superblock": true, 00:19:36.120 "num_base_bdevs": 2, 00:19:36.120 "num_base_bdevs_discovered": 2, 00:19:36.120 "num_base_bdevs_operational": 2, 00:19:36.120 "base_bdevs_list": [ 00:19:36.120 { 00:19:36.120 "name": "pt1", 00:19:36.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.120 "is_configured": true, 00:19:36.120 "data_offset": 2048, 00:19:36.120 "data_size": 63488 00:19:36.120 }, 00:19:36.120 { 00:19:36.120 "name": "pt2", 00:19:36.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.120 "is_configured": true, 00:19:36.120 "data_offset": 2048, 00:19:36.120 "data_size": 63488 00:19:36.120 } 00:19:36.120 ] 00:19:36.120 }' 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.120 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:36.688 07:40:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.688 07:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 [2024-10-07 07:40:36.004328] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.688 "name": "raid_bdev1", 00:19:36.688 "aliases": [ 00:19:36.688 "3063841f-7e16-4d43-b31b-d626e62269e9" 00:19:36.688 ], 00:19:36.688 "product_name": "Raid Volume", 00:19:36.688 "block_size": 512, 00:19:36.688 "num_blocks": 63488, 00:19:36.688 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:36.688 "assigned_rate_limits": { 00:19:36.688 "rw_ios_per_sec": 0, 00:19:36.688 "rw_mbytes_per_sec": 0, 00:19:36.688 "r_mbytes_per_sec": 0, 00:19:36.688 "w_mbytes_per_sec": 0 00:19:36.688 }, 00:19:36.688 "claimed": false, 00:19:36.688 "zoned": false, 00:19:36.688 "supported_io_types": { 00:19:36.688 "read": true, 00:19:36.688 "write": true, 00:19:36.688 "unmap": false, 00:19:36.688 "flush": false, 00:19:36.688 "reset": true, 00:19:36.688 "nvme_admin": false, 00:19:36.688 "nvme_io": false, 00:19:36.688 "nvme_io_md": false, 00:19:36.688 "write_zeroes": true, 00:19:36.688 "zcopy": false, 00:19:36.688 "get_zone_info": false, 00:19:36.688 "zone_management": false, 00:19:36.688 "zone_append": false, 00:19:36.688 "compare": false, 00:19:36.688 "compare_and_write": false, 00:19:36.688 "abort": false, 00:19:36.688 "seek_hole": false, 00:19:36.688 
"seek_data": false, 00:19:36.688 "copy": false, 00:19:36.688 "nvme_iov_md": false 00:19:36.688 }, 00:19:36.688 "memory_domains": [ 00:19:36.688 { 00:19:36.688 "dma_device_id": "system", 00:19:36.688 "dma_device_type": 1 00:19:36.688 }, 00:19:36.688 { 00:19:36.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.688 "dma_device_type": 2 00:19:36.688 }, 00:19:36.688 { 00:19:36.688 "dma_device_id": "system", 00:19:36.688 "dma_device_type": 1 00:19:36.688 }, 00:19:36.688 { 00:19:36.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.688 "dma_device_type": 2 00:19:36.688 } 00:19:36.688 ], 00:19:36.688 "driver_specific": { 00:19:36.688 "raid": { 00:19:36.688 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:36.688 "strip_size_kb": 0, 00:19:36.688 "state": "online", 00:19:36.688 "raid_level": "raid1", 00:19:36.688 "superblock": true, 00:19:36.688 "num_base_bdevs": 2, 00:19:36.688 "num_base_bdevs_discovered": 2, 00:19:36.688 "num_base_bdevs_operational": 2, 00:19:36.688 "base_bdevs_list": [ 00:19:36.688 { 00:19:36.688 "name": "pt1", 00:19:36.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.688 "is_configured": true, 00:19:36.688 "data_offset": 2048, 00:19:36.688 "data_size": 63488 00:19:36.688 }, 00:19:36.688 { 00:19:36.688 "name": "pt2", 00:19:36.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.688 "is_configured": true, 00:19:36.688 "data_offset": 2048, 00:19:36.688 "data_size": 63488 00:19:36.688 } 00:19:36.688 ] 00:19:36.688 } 00:19:36.688 } 00:19:36.688 }' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:36.688 pt2' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.688 07:40:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:36.689 [2024-10-07 07:40:36.232325] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3063841f-7e16-4d43-b31b-d626e62269e9 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3063841f-7e16-4d43-b31b-d626e62269e9 ']' 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 [2024-10-07 07:40:36.276063] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.948 [2024-10-07 07:40:36.276096] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.948 [2024-10-07 07:40:36.276187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.948 [2024-10-07 07:40:36.276251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.948 [2024-10-07 07:40:36.276265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.948 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 [2024-10-07 07:40:36.408093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:36.948 [2024-10-07 07:40:36.410350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:36.949 [2024-10-07 07:40:36.410428] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:19:36.949 [2024-10-07 07:40:36.410487] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:36.949 [2024-10-07 07:40:36.410505] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.949 [2024-10-07 07:40:36.410518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:36.949 request: 00:19:36.949 { 00:19:36.949 "name": "raid_bdev1", 00:19:36.949 "raid_level": "raid1", 00:19:36.949 "base_bdevs": [ 00:19:36.949 "malloc1", 00:19:36.949 "malloc2" 00:19:36.949 ], 00:19:36.949 "superblock": false, 00:19:36.949 "method": "bdev_raid_create", 00:19:36.949 "req_id": 1 00:19:36.949 } 00:19:36.949 Got JSON-RPC error response 00:19:36.949 response: 00:19:36.949 { 00:19:36.949 "code": -17, 00:19:36.949 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:36.949 } 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.949 [2024-10-07 07:40:36.476100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.949 [2024-10-07 07:40:36.476176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.949 [2024-10-07 07:40:36.476198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:36.949 [2024-10-07 07:40:36.476212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.949 [2024-10-07 07:40:36.478856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.949 [2024-10-07 07:40:36.478901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.949 [2024-10-07 07:40:36.478992] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:36.949 [2024-10-07 07:40:36.479056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.949 pt1 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.949 07:40:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.949 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:37.208 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.208 "name": "raid_bdev1", 00:19:37.208 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:37.208 "strip_size_kb": 0, 00:19:37.208 "state": "configuring", 00:19:37.208 "raid_level": "raid1", 00:19:37.208 "superblock": true, 00:19:37.208 "num_base_bdevs": 2, 00:19:37.208 "num_base_bdevs_discovered": 1, 00:19:37.208 "num_base_bdevs_operational": 2, 00:19:37.208 "base_bdevs_list": [ 00:19:37.208 { 00:19:37.208 "name": "pt1", 00:19:37.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.208 
"is_configured": true, 00:19:37.208 "data_offset": 2048, 00:19:37.208 "data_size": 63488 00:19:37.208 }, 00:19:37.208 { 00:19:37.208 "name": null, 00:19:37.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.208 "is_configured": false, 00:19:37.208 "data_offset": 2048, 00:19:37.208 "data_size": 63488 00:19:37.208 } 00:19:37.208 ] 00:19:37.208 }' 00:19:37.208 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.208 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.467 [2024-10-07 07:40:36.936171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.467 [2024-10-07 07:40:36.936244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.467 [2024-10-07 07:40:36.936267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:37.467 [2024-10-07 07:40:36.936282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.467 [2024-10-07 07:40:36.936828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.467 [2024-10-07 07:40:36.936860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.467 [2024-10-07 07:40:36.936947] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:37.467 [2024-10-07 07:40:36.936977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.467 [2024-10-07 07:40:36.937110] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:37.467 [2024-10-07 07:40:36.937129] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:37.467 [2024-10-07 07:40:36.937396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:37.467 [2024-10-07 07:40:36.937574] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:37.467 [2024-10-07 07:40:36.937590] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:37.467 [2024-10-07 07:40:36.937761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.467 pt2 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.467 
07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.467 "name": "raid_bdev1", 00:19:37.467 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:37.467 "strip_size_kb": 0, 00:19:37.467 "state": "online", 00:19:37.467 "raid_level": "raid1", 00:19:37.467 "superblock": true, 00:19:37.467 "num_base_bdevs": 2, 00:19:37.467 "num_base_bdevs_discovered": 2, 00:19:37.467 "num_base_bdevs_operational": 2, 00:19:37.467 "base_bdevs_list": [ 00:19:37.467 { 00:19:37.467 "name": "pt1", 00:19:37.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.467 "is_configured": true, 00:19:37.467 "data_offset": 2048, 00:19:37.467 "data_size": 63488 00:19:37.467 }, 00:19:37.467 { 00:19:37.467 "name": "pt2", 00:19:37.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.467 "is_configured": true, 00:19:37.467 "data_offset": 2048, 00:19:37.467 "data_size": 63488 00:19:37.467 } 00:19:37.467 ] 00:19:37.467 }' 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:37.467 07:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.035 [2024-10-07 07:40:37.412521] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.035 "name": "raid_bdev1", 00:19:38.035 "aliases": [ 00:19:38.035 "3063841f-7e16-4d43-b31b-d626e62269e9" 00:19:38.035 ], 00:19:38.035 "product_name": "Raid Volume", 00:19:38.035 "block_size": 512, 00:19:38.035 "num_blocks": 63488, 00:19:38.035 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:38.035 "assigned_rate_limits": { 00:19:38.035 "rw_ios_per_sec": 0, 00:19:38.035 "rw_mbytes_per_sec": 0, 00:19:38.035 "r_mbytes_per_sec": 0, 00:19:38.035 "w_mbytes_per_sec": 0 
00:19:38.035 }, 00:19:38.035 "claimed": false, 00:19:38.035 "zoned": false, 00:19:38.035 "supported_io_types": { 00:19:38.035 "read": true, 00:19:38.035 "write": true, 00:19:38.035 "unmap": false, 00:19:38.035 "flush": false, 00:19:38.035 "reset": true, 00:19:38.035 "nvme_admin": false, 00:19:38.035 "nvme_io": false, 00:19:38.035 "nvme_io_md": false, 00:19:38.035 "write_zeroes": true, 00:19:38.035 "zcopy": false, 00:19:38.035 "get_zone_info": false, 00:19:38.035 "zone_management": false, 00:19:38.035 "zone_append": false, 00:19:38.035 "compare": false, 00:19:38.035 "compare_and_write": false, 00:19:38.035 "abort": false, 00:19:38.035 "seek_hole": false, 00:19:38.035 "seek_data": false, 00:19:38.035 "copy": false, 00:19:38.035 "nvme_iov_md": false 00:19:38.035 }, 00:19:38.035 "memory_domains": [ 00:19:38.035 { 00:19:38.035 "dma_device_id": "system", 00:19:38.035 "dma_device_type": 1 00:19:38.035 }, 00:19:38.035 { 00:19:38.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.035 "dma_device_type": 2 00:19:38.035 }, 00:19:38.035 { 00:19:38.035 "dma_device_id": "system", 00:19:38.035 "dma_device_type": 1 00:19:38.035 }, 00:19:38.035 { 00:19:38.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.035 "dma_device_type": 2 00:19:38.035 } 00:19:38.035 ], 00:19:38.035 "driver_specific": { 00:19:38.035 "raid": { 00:19:38.035 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:38.035 "strip_size_kb": 0, 00:19:38.035 "state": "online", 00:19:38.035 "raid_level": "raid1", 00:19:38.035 "superblock": true, 00:19:38.035 "num_base_bdevs": 2, 00:19:38.035 "num_base_bdevs_discovered": 2, 00:19:38.035 "num_base_bdevs_operational": 2, 00:19:38.035 "base_bdevs_list": [ 00:19:38.035 { 00:19:38.035 "name": "pt1", 00:19:38.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.035 "is_configured": true, 00:19:38.035 "data_offset": 2048, 00:19:38.035 "data_size": 63488 00:19:38.035 }, 00:19:38.035 { 00:19:38.035 "name": "pt2", 00:19:38.035 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:38.035 "is_configured": true, 00:19:38.035 "data_offset": 2048, 00:19:38.035 "data_size": 63488 00:19:38.035 } 00:19:38.035 ] 00:19:38.035 } 00:19:38.035 } 00:19:38.035 }' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:38.035 pt2' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:19:38.035 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.294 [2024-10-07 07:40:37.648598] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3063841f-7e16-4d43-b31b-d626e62269e9 '!=' 3063841f-7e16-4d43-b31b-d626e62269e9 ']' 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.294 07:40:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.295 [2024-10-07 07:40:37.684375] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:19:38.295 "name": "raid_bdev1", 00:19:38.295 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:38.295 "strip_size_kb": 0, 00:19:38.295 "state": "online", 00:19:38.295 "raid_level": "raid1", 00:19:38.295 "superblock": true, 00:19:38.295 "num_base_bdevs": 2, 00:19:38.295 "num_base_bdevs_discovered": 1, 00:19:38.295 "num_base_bdevs_operational": 1, 00:19:38.295 "base_bdevs_list": [ 00:19:38.295 { 00:19:38.295 "name": null, 00:19:38.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.295 "is_configured": false, 00:19:38.295 "data_offset": 0, 00:19:38.295 "data_size": 63488 00:19:38.295 }, 00:19:38.295 { 00:19:38.295 "name": "pt2", 00:19:38.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.295 "is_configured": true, 00:19:38.295 "data_offset": 2048, 00:19:38.295 "data_size": 63488 00:19:38.295 } 00:19:38.295 ] 00:19:38.295 }' 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.295 07:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.862 [2024-10-07 07:40:38.128449] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.862 [2024-10-07 07:40:38.128482] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.862 [2024-10-07 07:40:38.128567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.862 [2024-10-07 07:40:38.128617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.862 [2024-10-07 07:40:38.128631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.862 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.863 [2024-10-07 07:40:38.192479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.863 [2024-10-07 07:40:38.192550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.863 [2024-10-07 07:40:38.192572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:38.863 [2024-10-07 07:40:38.192587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.863 [2024-10-07 07:40:38.195414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.863 [2024-10-07 07:40:38.195464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.863 [2024-10-07 07:40:38.195553] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:38.863 [2024-10-07 07:40:38.195610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.863 [2024-10-07 07:40:38.195736] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:38.863 [2024-10-07 07:40:38.195760] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:38.863 [2024-10-07 07:40:38.196019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:38.863 [2024-10-07 07:40:38.196191] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:38.863 [2024-10-07 07:40:38.196210] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:19:38.863 [2024-10-07 07:40:38.196400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.863 pt2 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:19:38.863 "name": "raid_bdev1", 00:19:38.863 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:38.863 "strip_size_kb": 0, 00:19:38.863 "state": "online", 00:19:38.863 "raid_level": "raid1", 00:19:38.863 "superblock": true, 00:19:38.863 "num_base_bdevs": 2, 00:19:38.863 "num_base_bdevs_discovered": 1, 00:19:38.863 "num_base_bdevs_operational": 1, 00:19:38.863 "base_bdevs_list": [ 00:19:38.863 { 00:19:38.863 "name": null, 00:19:38.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.863 "is_configured": false, 00:19:38.863 "data_offset": 2048, 00:19:38.863 "data_size": 63488 00:19:38.863 }, 00:19:38.863 { 00:19:38.863 "name": "pt2", 00:19:38.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.863 "is_configured": true, 00:19:38.863 "data_offset": 2048, 00:19:38.863 "data_size": 63488 00:19:38.863 } 00:19:38.863 ] 00:19:38.863 }' 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.863 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.122 [2024-10-07 07:40:38.652560] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.122 [2024-10-07 07:40:38.652599] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.122 [2024-10-07 07:40:38.652681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.122 [2024-10-07 07:40:38.652759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.122 [2024-10-07 07:40:38.652789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.122 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.380 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:39.380 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:39.380 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:39.380 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.380 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.380 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.380 [2024-10-07 07:40:38.708630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.380 [2024-10-07 07:40:38.708725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.380 [2024-10-07 07:40:38.708758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:39.380 [2024-10-07 07:40:38.708770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.380 [2024-10-07 07:40:38.711438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.380 [2024-10-07 07:40:38.711496] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:39.380 [2024-10-07 07:40:38.711608] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:39.381 [2024-10-07 07:40:38.711656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:39.381 [2024-10-07 07:40:38.711830] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:39.381 [2024-10-07 07:40:38.711844] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.381 [2024-10-07 07:40:38.711868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:39.381 [2024-10-07 07:40:38.711931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:39.381 [2024-10-07 07:40:38.712018] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:39.381 [2024-10-07 07:40:38.712029] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:39.381 [2024-10-07 07:40:38.712337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:39.381 [2024-10-07 07:40:38.712524] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:39.381 [2024-10-07 07:40:38.712551] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:39.381 [2024-10-07 07:40:38.712820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.381 pt1 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.381 "name": "raid_bdev1", 00:19:39.381 "uuid": "3063841f-7e16-4d43-b31b-d626e62269e9", 00:19:39.381 "strip_size_kb": 0, 00:19:39.381 "state": "online", 00:19:39.381 "raid_level": "raid1", 00:19:39.381 "superblock": true, 00:19:39.381 "num_base_bdevs": 2, 00:19:39.381 "num_base_bdevs_discovered": 1, 00:19:39.381 "num_base_bdevs_operational": 
1, 00:19:39.381 "base_bdevs_list": [ 00:19:39.381 { 00:19:39.381 "name": null, 00:19:39.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.381 "is_configured": false, 00:19:39.381 "data_offset": 2048, 00:19:39.381 "data_size": 63488 00:19:39.381 }, 00:19:39.381 { 00:19:39.381 "name": "pt2", 00:19:39.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.381 "is_configured": true, 00:19:39.381 "data_offset": 2048, 00:19:39.381 "data_size": 63488 00:19:39.381 } 00:19:39.381 ] 00:19:39.381 }' 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.381 07:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.639 07:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:39.639 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.639 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.639 07:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.898 [2024-10-07 07:40:39.245105] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3063841f-7e16-4d43-b31b-d626e62269e9 '!=' 3063841f-7e16-4d43-b31b-d626e62269e9 ']' 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63239 00:19:39.898 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 63239 ']' 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 63239 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 63239 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:39.899 killing process with pid 63239 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 63239' 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 63239 00:19:39.899 [2024-10-07 07:40:39.319189] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:39.899 [2024-10-07 07:40:39.319298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.899 [2024-10-07 07:40:39.319354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.899 [2024-10-07 07:40:39.319373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:39.899 07:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 
63239 00:19:40.158 [2024-10-07 07:40:39.540040] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:41.533 07:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:41.533 00:19:41.533 real 0m6.460s 00:19:41.533 user 0m9.687s 00:19:41.533 sys 0m1.131s 00:19:41.533 07:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:41.533 07:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.533 ************************************ 00:19:41.533 END TEST raid_superblock_test 00:19:41.533 ************************************ 00:19:41.533 07:40:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:19:41.533 07:40:40 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:19:41.533 07:40:40 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:41.533 07:40:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:41.533 ************************************ 00:19:41.533 START TEST raid_read_error_test 00:19:41.533 ************************************ 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid1 2 read 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:41.533 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1Cl3pa2ymd 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63575 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63575 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:41.534 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 63575 ']' 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:41.534 07:40:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.534 [2024-10-07 07:40:41.080720] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:19:41.534 [2024-10-07 07:40:41.081206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63575 ] 00:19:41.792 [2024-10-07 07:40:41.273132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.050 [2024-10-07 07:40:41.571652] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.308 [2024-10-07 07:40:41.790725] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.308 [2024-10-07 07:40:41.790766] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.565 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:42.565 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:19:42.565 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:19:42.565 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:42.565 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.565 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.565 BaseBdev1_malloc 00:19:42.565 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.566 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:42.566 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.566 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.566 true 00:19:42.566 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.566 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:42.566 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.566 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.566 [2024-10-07 07:40:42.123388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:42.566 [2024-10-07 07:40:42.123464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.566 [2024-10-07 07:40:42.123488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:42.566 [2024-10-07 07:40:42.123503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.823 [2024-10-07 07:40:42.126140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.823 [2024-10-07 07:40:42.126188] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:19:42.823 BaseBdev1 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.823 BaseBdev2_malloc 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.823 true 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.823 [2024-10-07 07:40:42.197246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:42.823 [2024-10-07 07:40:42.197337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.823 [2024-10-07 07:40:42.197364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:42.823 [2024-10-07 07:40:42.197381] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.823 [2024-10-07 07:40:42.200211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.823 [2024-10-07 07:40:42.200433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:42.823 BaseBdev2 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.823 [2024-10-07 07:40:42.205406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.823 [2024-10-07 07:40:42.208043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.823 [2024-10-07 07:40:42.208335] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:42.823 [2024-10-07 07:40:42.208400] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:42.823 [2024-10-07 07:40:42.208907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:42.823 [2024-10-07 07:40:42.209296] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:42.823 [2024-10-07 07:40:42.209417] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:42.823 [2024-10-07 07:40:42.209796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:42.823 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.823 "name": "raid_bdev1", 00:19:42.823 "uuid": "2371f8bc-2fd7-4ae9-80f8-f07a6e7b756b", 00:19:42.823 "strip_size_kb": 0, 00:19:42.823 "state": "online", 00:19:42.823 "raid_level": "raid1", 00:19:42.823 "superblock": true, 00:19:42.823 "num_base_bdevs": 2, 00:19:42.823 
"num_base_bdevs_discovered": 2, 00:19:42.823 "num_base_bdevs_operational": 2, 00:19:42.823 "base_bdevs_list": [ 00:19:42.823 { 00:19:42.823 "name": "BaseBdev1", 00:19:42.823 "uuid": "842dbe22-dcfd-5b77-a453-ca8eca1b64ec", 00:19:42.824 "is_configured": true, 00:19:42.824 "data_offset": 2048, 00:19:42.824 "data_size": 63488 00:19:42.824 }, 00:19:42.824 { 00:19:42.824 "name": "BaseBdev2", 00:19:42.824 "uuid": "56e37730-e459-5aac-a40e-3beef63138ad", 00:19:42.824 "is_configured": true, 00:19:42.824 "data_offset": 2048, 00:19:42.824 "data_size": 63488 00:19:42.824 } 00:19:42.824 ] 00:19:42.824 }' 00:19:42.824 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.824 07:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.389 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:43.389 07:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:43.389 [2024-10-07 07:40:42.791179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:19:44.327 07:40:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.327 "name": "raid_bdev1", 00:19:44.327 "uuid": "2371f8bc-2fd7-4ae9-80f8-f07a6e7b756b", 00:19:44.327 "strip_size_kb": 0, 00:19:44.327 "state": "online", 
00:19:44.327 "raid_level": "raid1", 00:19:44.327 "superblock": true, 00:19:44.327 "num_base_bdevs": 2, 00:19:44.327 "num_base_bdevs_discovered": 2, 00:19:44.327 "num_base_bdevs_operational": 2, 00:19:44.327 "base_bdevs_list": [ 00:19:44.327 { 00:19:44.327 "name": "BaseBdev1", 00:19:44.327 "uuid": "842dbe22-dcfd-5b77-a453-ca8eca1b64ec", 00:19:44.327 "is_configured": true, 00:19:44.327 "data_offset": 2048, 00:19:44.327 "data_size": 63488 00:19:44.327 }, 00:19:44.327 { 00:19:44.327 "name": "BaseBdev2", 00:19:44.327 "uuid": "56e37730-e459-5aac-a40e-3beef63138ad", 00:19:44.327 "is_configured": true, 00:19:44.327 "data_offset": 2048, 00:19:44.327 "data_size": 63488 00:19:44.327 } 00:19:44.327 ] 00:19:44.327 }' 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.327 07:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.585 07:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:44.585 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:44.585 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.845 [2024-10-07 07:40:44.146467] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.845 [2024-10-07 07:40:44.146517] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.845 [2024-10-07 07:40:44.149194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.845 [2024-10-07 07:40:44.149244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.845 [2024-10-07 07:40:44.149326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.845 [2024-10-07 07:40:44.149341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:19:44.845 { 00:19:44.845 "results": [ 00:19:44.845 { 00:19:44.845 "job": "raid_bdev1", 00:19:44.845 "core_mask": "0x1", 00:19:44.845 "workload": "randrw", 00:19:44.845 "percentage": 50, 00:19:44.845 "status": "finished", 00:19:44.845 "queue_depth": 1, 00:19:44.845 "io_size": 131072, 00:19:44.845 "runtime": 1.352996, 00:19:44.845 "iops": 16518.895843003233, 00:19:44.845 "mibps": 2064.861980375404, 00:19:44.845 "io_failed": 0, 00:19:44.845 "io_timeout": 0, 00:19:44.845 "avg_latency_us": 57.71823398316821, 00:19:44.845 "min_latency_us": 23.771428571428572, 00:19:44.845 "max_latency_us": 3089.554285714286 00:19:44.845 } 00:19:44.846 ], 00:19:44.846 "core_count": 1 00:19:44.846 } 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63575 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 63575 ']' 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 63575 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 63575 00:19:44.846 killing process with pid 63575 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 63575' 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 63575 00:19:44.846 [2024-10-07 
07:40:44.196513] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:44.846 07:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 63575 00:19:44.846 [2024-10-07 07:40:44.349104] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1Cl3pa2ymd 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:46.262 00:19:46.262 real 0m4.796s 00:19:46.262 user 0m5.770s 00:19:46.262 sys 0m0.655s 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:46.262 07:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.262 ************************************ 00:19:46.262 END TEST raid_read_error_test 00:19:46.262 ************************************ 00:19:46.262 07:40:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:19:46.262 07:40:45 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:19:46.262 07:40:45 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:46.262 07:40:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.262 ************************************ 00:19:46.262 START TEST 
raid_write_error_test 00:19:46.262 ************************************ 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid1 2 write 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:46.262 07:40:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y5y3imrda8 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63720 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63720 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 63720 ']' 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:46.262 07:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.521 [2024-10-07 07:40:45.942441] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:19:46.521 [2024-10-07 07:40:45.942626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63720 ] 00:19:46.778 [2024-10-07 07:40:46.128241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.036 [2024-10-07 07:40:46.354102] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.036 [2024-10-07 07:40:46.575070] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.036 [2024-10-07 07:40:46.575144] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.601 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:47.601 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:19:47.601 07:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:47.601 07:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:47.601 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.601 07:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.601 BaseBdev1_malloc 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.601 true 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.601 [2024-10-07 07:40:47.017936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:47.601 [2024-10-07 07:40:47.018001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.601 [2024-10-07 07:40:47.018025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:47.601 [2024-10-07 07:40:47.018041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.601 [2024-10-07 07:40:47.020886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.601 [2024-10-07 07:40:47.020932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:47.601 BaseBdev1 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.601 BaseBdev2_malloc 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:47.601 07:40:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.601 true 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.601 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.601 [2024-10-07 07:40:47.096460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:47.601 [2024-10-07 07:40:47.096543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.601 [2024-10-07 07:40:47.096568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:47.601 [2024-10-07 07:40:47.096582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.602 [2024-10-07 07:40:47.099225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.602 [2024-10-07 07:40:47.099275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:47.602 BaseBdev2 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.602 [2024-10-07 07:40:47.104615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:47.602 [2024-10-07 07:40:47.107767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.602 [2024-10-07 07:40:47.108048] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:47.602 [2024-10-07 07:40:47.108069] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:47.602 [2024-10-07 07:40:47.108368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:47.602 [2024-10-07 07:40:47.108558] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:47.602 [2024-10-07 07:40:47.108569] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:47.602 [2024-10-07 07:40:47.108838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.602 "name": "raid_bdev1", 00:19:47.602 "uuid": "38781906-bb06-45a0-8eea-f33b0591a959", 00:19:47.602 "strip_size_kb": 0, 00:19:47.602 "state": "online", 00:19:47.602 "raid_level": "raid1", 00:19:47.602 "superblock": true, 00:19:47.602 "num_base_bdevs": 2, 00:19:47.602 "num_base_bdevs_discovered": 2, 00:19:47.602 "num_base_bdevs_operational": 2, 00:19:47.602 "base_bdevs_list": [ 00:19:47.602 { 00:19:47.602 "name": "BaseBdev1", 00:19:47.602 "uuid": "82e88b1d-ecf4-5e47-b91e-d5366cb3b972", 00:19:47.602 "is_configured": true, 00:19:47.602 "data_offset": 2048, 00:19:47.602 "data_size": 63488 00:19:47.602 }, 00:19:47.602 { 00:19:47.602 "name": "BaseBdev2", 00:19:47.602 "uuid": "c7bbb876-f791-5c9c-99af-991f57fe686a", 00:19:47.602 "is_configured": true, 00:19:47.602 "data_offset": 2048, 00:19:47.602 "data_size": 63488 00:19:47.602 } 00:19:47.602 ] 00:19:47.602 }' 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.602 07:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.169 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:48.169 07:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:48.169 [2024-10-07 07:40:47.666324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.103 [2024-10-07 07:40:48.544900] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:49.103 [2024-10-07 07:40:48.544988] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.103 [2024-10-07 07:40:48.545209] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.103 07:40:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.103 "name": "raid_bdev1", 00:19:49.103 "uuid": "38781906-bb06-45a0-8eea-f33b0591a959", 00:19:49.103 "strip_size_kb": 0, 00:19:49.103 "state": "online", 00:19:49.103 "raid_level": "raid1", 00:19:49.103 "superblock": true, 00:19:49.103 "num_base_bdevs": 2, 00:19:49.103 "num_base_bdevs_discovered": 1, 00:19:49.103 "num_base_bdevs_operational": 1, 00:19:49.103 "base_bdevs_list": [ 00:19:49.103 { 00:19:49.103 "name": null, 00:19:49.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.103 "is_configured": false, 00:19:49.103 "data_offset": 0, 00:19:49.103 "data_size": 63488 00:19:49.103 }, 
00:19:49.103 { 00:19:49.103 "name": "BaseBdev2", 00:19:49.103 "uuid": "c7bbb876-f791-5c9c-99af-991f57fe686a", 00:19:49.103 "is_configured": true, 00:19:49.103 "data_offset": 2048, 00:19:49.103 "data_size": 63488 00:19:49.103 } 00:19:49.103 ] 00:19:49.103 }' 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.103 07:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.669 [2024-10-07 07:40:49.020916] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.669 [2024-10-07 07:40:49.021118] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:49.669 [2024-10-07 07:40:49.024031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.669 [2024-10-07 07:40:49.024072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.669 [2024-10-07 07:40:49.024131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.669 [2024-10-07 07:40:49.024143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:49.669 { 00:19:49.669 "results": [ 00:19:49.669 { 00:19:49.669 "job": "raid_bdev1", 00:19:49.669 "core_mask": "0x1", 00:19:49.669 "workload": "randrw", 00:19:49.669 "percentage": 50, 00:19:49.669 "status": "finished", 00:19:49.669 "queue_depth": 1, 00:19:49.669 "io_size": 131072, 00:19:49.669 "runtime": 1.352221, 
00:19:49.669 "iops": 18038.471522036707, 00:19:49.669 "mibps": 2254.8089402545884, 00:19:49.669 "io_failed": 0, 00:19:49.669 "io_timeout": 0, 00:19:49.669 "avg_latency_us": 52.35069609083384, 00:19:49.669 "min_latency_us": 23.527619047619048, 00:19:49.669 "max_latency_us": 1849.0514285714285 00:19:49.669 } 00:19:49.669 ], 00:19:49.669 "core_count": 1 00:19:49.669 } 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63720 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 63720 ']' 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 63720 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # uname 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 63720 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 63720' 00:19:49.669 killing process with pid 63720 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 63720 00:19:49.669 [2024-10-07 07:40:49.071291] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.669 07:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 63720 00:19:49.669 [2024-10-07 07:40:49.215414] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y5y3imrda8 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:51.568 ************************************ 00:19:51.568 END TEST raid_write_error_test 00:19:51.568 ************************************ 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:51.568 00:19:51.568 real 0m4.920s 00:19:51.568 user 0m5.952s 00:19:51.568 sys 0m0.630s 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:19:51.568 07:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.568 07:40:50 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:19:51.568 07:40:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:51.568 07:40:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:19:51.568 07:40:50 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:19:51.568 07:40:50 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:19:51.568 07:40:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:51.568 ************************************ 00:19:51.568 START TEST raid_state_function_test 00:19:51.568 ************************************ 00:19:51.568 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid0 3 false 00:19:51.568 07:40:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:19:51.568 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:51.568 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63864 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:51.569 Process raid pid: 63864 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63864' 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63864 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 63864 ']' 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:19:51.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:19:51.569 07:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.569 [2024-10-07 07:40:50.920052] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:19:51.569 [2024-10-07 07:40:50.920267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.569 [2024-10-07 07:40:51.106224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.827 [2024-10-07 07:40:51.339450] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.085 [2024-10-07 07:40:51.562649] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.085 [2024-10-07 07:40:51.562698] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.663 [2024-10-07 07:40:51.911240] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:52.663 [2024-10-07 07:40:51.911470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:52.663 [2024-10-07 07:40:51.911574] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.663 [2024-10-07 07:40:51.911630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.663 [2024-10-07 07:40:51.911665] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:52.663 [2024-10-07 07:40:51.911778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.663 "name": "Existed_Raid", 00:19:52.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.663 "strip_size_kb": 64, 00:19:52.663 "state": "configuring", 00:19:52.663 "raid_level": "raid0", 00:19:52.663 "superblock": false, 00:19:52.663 "num_base_bdevs": 3, 00:19:52.663 "num_base_bdevs_discovered": 0, 00:19:52.663 "num_base_bdevs_operational": 3, 00:19:52.663 "base_bdevs_list": [ 00:19:52.663 { 00:19:52.663 "name": "BaseBdev1", 00:19:52.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.663 "is_configured": false, 00:19:52.663 "data_offset": 0, 00:19:52.663 "data_size": 0 00:19:52.663 }, 00:19:52.663 { 00:19:52.663 "name": "BaseBdev2", 00:19:52.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.663 "is_configured": false, 00:19:52.663 "data_offset": 0, 00:19:52.663 "data_size": 0 00:19:52.663 }, 00:19:52.663 { 00:19:52.663 "name": "BaseBdev3", 00:19:52.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.663 "is_configured": false, 00:19:52.663 "data_offset": 0, 00:19:52.663 "data_size": 0 00:19:52.663 } 00:19:52.663 ] 00:19:52.663 }' 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.663 07:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.966 07:40:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 [2024-10-07 07:40:52.315309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.966 [2024-10-07 07:40:52.315366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 [2024-10-07 07:40:52.323325] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:52.966 [2024-10-07 07:40:52.323566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:52.966 [2024-10-07 07:40:52.323596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.966 [2024-10-07 07:40:52.323618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.966 [2024-10-07 07:40:52.323631] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:52.966 [2024-10-07 07:40:52.323650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 
00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 [2024-10-07 07:40:52.399938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.966 BaseBdev1 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 [ 00:19:52.966 { 00:19:52.966 "name": "BaseBdev1", 00:19:52.966 "aliases": [ 00:19:52.966 "9109d324-f4a0-4249-815b-5a49e755d137" 00:19:52.966 ], 00:19:52.966 
"product_name": "Malloc disk", 00:19:52.966 "block_size": 512, 00:19:52.966 "num_blocks": 65536, 00:19:52.966 "uuid": "9109d324-f4a0-4249-815b-5a49e755d137", 00:19:52.966 "assigned_rate_limits": { 00:19:52.966 "rw_ios_per_sec": 0, 00:19:52.966 "rw_mbytes_per_sec": 0, 00:19:52.966 "r_mbytes_per_sec": 0, 00:19:52.966 "w_mbytes_per_sec": 0 00:19:52.966 }, 00:19:52.966 "claimed": true, 00:19:52.966 "claim_type": "exclusive_write", 00:19:52.966 "zoned": false, 00:19:52.966 "supported_io_types": { 00:19:52.966 "read": true, 00:19:52.966 "write": true, 00:19:52.966 "unmap": true, 00:19:52.966 "flush": true, 00:19:52.966 "reset": true, 00:19:52.966 "nvme_admin": false, 00:19:52.966 "nvme_io": false, 00:19:52.966 "nvme_io_md": false, 00:19:52.966 "write_zeroes": true, 00:19:52.966 "zcopy": true, 00:19:52.966 "get_zone_info": false, 00:19:52.966 "zone_management": false, 00:19:52.966 "zone_append": false, 00:19:52.966 "compare": false, 00:19:52.966 "compare_and_write": false, 00:19:52.966 "abort": true, 00:19:52.966 "seek_hole": false, 00:19:52.966 "seek_data": false, 00:19:52.966 "copy": true, 00:19:52.966 "nvme_iov_md": false 00:19:52.966 }, 00:19:52.966 "memory_domains": [ 00:19:52.966 { 00:19:52.966 "dma_device_id": "system", 00:19:52.966 "dma_device_type": 1 00:19:52.966 }, 00:19:52.966 { 00:19:52.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.966 "dma_device_type": 2 00:19:52.966 } 00:19:52.966 ], 00:19:52.966 "driver_specific": {} 00:19:52.966 } 00:19:52.966 ] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.966 07:40:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.966 "name": "Existed_Raid", 00:19:52.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.966 "strip_size_kb": 64, 00:19:52.966 "state": "configuring", 00:19:52.966 "raid_level": "raid0", 00:19:52.966 "superblock": false, 00:19:52.966 "num_base_bdevs": 3, 00:19:52.966 "num_base_bdevs_discovered": 1, 00:19:52.966 "num_base_bdevs_operational": 3, 00:19:52.966 "base_bdevs_list": [ 00:19:52.966 { 00:19:52.966 "name": "BaseBdev1", 
00:19:52.966 "uuid": "9109d324-f4a0-4249-815b-5a49e755d137", 00:19:52.966 "is_configured": true, 00:19:52.966 "data_offset": 0, 00:19:52.966 "data_size": 65536 00:19:52.966 }, 00:19:52.966 { 00:19:52.966 "name": "BaseBdev2", 00:19:52.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.966 "is_configured": false, 00:19:52.966 "data_offset": 0, 00:19:52.966 "data_size": 0 00:19:52.966 }, 00:19:52.966 { 00:19:52.966 "name": "BaseBdev3", 00:19:52.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.966 "is_configured": false, 00:19:52.966 "data_offset": 0, 00:19:52.966 "data_size": 0 00:19:52.966 } 00:19:52.966 ] 00:19:52.966 }' 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.966 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 [2024-10-07 07:40:52.936114] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:53.532 [2024-10-07 07:40:52.936171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 [2024-10-07 
07:40:52.948152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.532 [2024-10-07 07:40:52.950596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.532 [2024-10-07 07:40:52.950792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.532 [2024-10-07 07:40:52.950891] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.532 [2024-10-07 07:40:52.950943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.532 "name": "Existed_Raid", 00:19:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.532 "strip_size_kb": 64, 00:19:53.532 "state": "configuring", 00:19:53.532 "raid_level": "raid0", 00:19:53.532 "superblock": false, 00:19:53.532 "num_base_bdevs": 3, 00:19:53.532 "num_base_bdevs_discovered": 1, 00:19:53.532 "num_base_bdevs_operational": 3, 00:19:53.532 "base_bdevs_list": [ 00:19:53.532 { 00:19:53.532 "name": "BaseBdev1", 00:19:53.532 "uuid": "9109d324-f4a0-4249-815b-5a49e755d137", 00:19:53.532 "is_configured": true, 00:19:53.532 "data_offset": 0, 00:19:53.532 "data_size": 65536 00:19:53.532 }, 00:19:53.532 { 00:19:53.532 "name": "BaseBdev2", 00:19:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.532 "is_configured": false, 00:19:53.532 "data_offset": 0, 00:19:53.532 "data_size": 0 00:19:53.532 }, 00:19:53.532 { 00:19:53.532 "name": "BaseBdev3", 00:19:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.532 "is_configured": false, 00:19:53.532 "data_offset": 0, 00:19:53.532 "data_size": 0 00:19:53.532 } 00:19:53.532 ] 00:19:53.532 }' 00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:53.532 07:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.098 [2024-10-07 07:40:53.446237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.098 BaseBdev2 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:54.098 07:40:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.098 [ 00:19:54.098 { 00:19:54.098 "name": "BaseBdev2", 00:19:54.098 "aliases": [ 00:19:54.098 "cd091048-d4f1-4e85-90ca-a7fcf80b4b9f" 00:19:54.098 ], 00:19:54.098 "product_name": "Malloc disk", 00:19:54.098 "block_size": 512, 00:19:54.098 "num_blocks": 65536, 00:19:54.098 "uuid": "cd091048-d4f1-4e85-90ca-a7fcf80b4b9f", 00:19:54.098 "assigned_rate_limits": { 00:19:54.098 "rw_ios_per_sec": 0, 00:19:54.098 "rw_mbytes_per_sec": 0, 00:19:54.098 "r_mbytes_per_sec": 0, 00:19:54.098 "w_mbytes_per_sec": 0 00:19:54.098 }, 00:19:54.098 "claimed": true, 00:19:54.098 "claim_type": "exclusive_write", 00:19:54.098 "zoned": false, 00:19:54.098 "supported_io_types": { 00:19:54.098 "read": true, 00:19:54.098 "write": true, 00:19:54.098 "unmap": true, 00:19:54.098 "flush": true, 00:19:54.098 "reset": true, 00:19:54.098 "nvme_admin": false, 00:19:54.098 "nvme_io": false, 00:19:54.098 "nvme_io_md": false, 00:19:54.098 "write_zeroes": true, 00:19:54.098 "zcopy": true, 00:19:54.098 "get_zone_info": false, 00:19:54.098 "zone_management": false, 00:19:54.098 "zone_append": false, 00:19:54.098 "compare": false, 00:19:54.098 "compare_and_write": false, 00:19:54.098 "abort": true, 00:19:54.098 "seek_hole": false, 00:19:54.098 "seek_data": false, 00:19:54.098 "copy": true, 00:19:54.098 "nvme_iov_md": false 00:19:54.098 }, 00:19:54.098 "memory_domains": [ 00:19:54.098 { 00:19:54.098 "dma_device_id": "system", 00:19:54.098 "dma_device_type": 1 00:19:54.098 }, 00:19:54.098 { 00:19:54.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.098 "dma_device_type": 2 00:19:54.098 } 00:19:54.098 ], 00:19:54.098 "driver_specific": {} 00:19:54.098 } 00:19:54.098 ] 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.098 07:40:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.098 07:40:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.099 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.099 "name": "Existed_Raid", 00:19:54.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.099 "strip_size_kb": 64, 00:19:54.099 "state": "configuring", 00:19:54.099 "raid_level": "raid0", 00:19:54.099 "superblock": false, 00:19:54.099 "num_base_bdevs": 3, 00:19:54.099 "num_base_bdevs_discovered": 2, 00:19:54.099 "num_base_bdevs_operational": 3, 00:19:54.099 "base_bdevs_list": [ 00:19:54.099 { 00:19:54.099 "name": "BaseBdev1", 00:19:54.099 "uuid": "9109d324-f4a0-4249-815b-5a49e755d137", 00:19:54.099 "is_configured": true, 00:19:54.099 "data_offset": 0, 00:19:54.099 "data_size": 65536 00:19:54.099 }, 00:19:54.099 { 00:19:54.099 "name": "BaseBdev2", 00:19:54.099 "uuid": "cd091048-d4f1-4e85-90ca-a7fcf80b4b9f", 00:19:54.099 "is_configured": true, 00:19:54.099 "data_offset": 0, 00:19:54.099 "data_size": 65536 00:19:54.099 }, 00:19:54.099 { 00:19:54.099 "name": "BaseBdev3", 00:19:54.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.099 "is_configured": false, 00:19:54.099 "data_offset": 0, 00:19:54.099 "data_size": 0 00:19:54.099 } 00:19:54.099 ] 00:19:54.099 }' 00:19:54.099 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.099 07:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.785 07:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.785 [2024-10-07 07:40:54.045056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:54.785 [2024-10-07 07:40:54.045123] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:54.785 [2024-10-07 07:40:54.045144] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:54.785 [2024-10-07 07:40:54.045425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:54.785 [2024-10-07 07:40:54.045569] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:54.785 [2024-10-07 07:40:54.045584] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:54.785 [2024-10-07 07:40:54.045883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.785 BaseBdev3 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.785 
07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.785 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.785 [ 00:19:54.785 { 00:19:54.785 "name": "BaseBdev3", 00:19:54.785 "aliases": [ 00:19:54.786 "556b82d6-5191-497a-9b2f-cdf47d9924e3" 00:19:54.786 ], 00:19:54.786 "product_name": "Malloc disk", 00:19:54.786 "block_size": 512, 00:19:54.786 "num_blocks": 65536, 00:19:54.786 "uuid": "556b82d6-5191-497a-9b2f-cdf47d9924e3", 00:19:54.786 "assigned_rate_limits": { 00:19:54.786 "rw_ios_per_sec": 0, 00:19:54.786 "rw_mbytes_per_sec": 0, 00:19:54.786 "r_mbytes_per_sec": 0, 00:19:54.786 "w_mbytes_per_sec": 0 00:19:54.786 }, 00:19:54.786 "claimed": true, 00:19:54.786 "claim_type": "exclusive_write", 00:19:54.786 "zoned": false, 00:19:54.786 "supported_io_types": { 00:19:54.786 "read": true, 00:19:54.786 "write": true, 00:19:54.786 "unmap": true, 00:19:54.786 "flush": true, 00:19:54.786 "reset": true, 00:19:54.786 "nvme_admin": false, 00:19:54.786 "nvme_io": false, 00:19:54.786 "nvme_io_md": false, 00:19:54.786 "write_zeroes": true, 00:19:54.786 "zcopy": true, 00:19:54.786 "get_zone_info": false, 00:19:54.786 "zone_management": false, 00:19:54.786 "zone_append": false, 00:19:54.786 "compare": false, 00:19:54.786 "compare_and_write": false, 00:19:54.786 "abort": true, 00:19:54.786 "seek_hole": false, 00:19:54.786 "seek_data": false, 00:19:54.786 "copy": true, 00:19:54.786 "nvme_iov_md": false 00:19:54.786 }, 00:19:54.786 "memory_domains": [ 00:19:54.786 { 00:19:54.786 "dma_device_id": "system", 00:19:54.786 "dma_device_type": 1 00:19:54.786 }, 00:19:54.786 { 00:19:54.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.786 "dma_device_type": 2 00:19:54.786 } 00:19:54.786 ], 00:19:54.786 "driver_specific": {} 00:19:54.786 } 00:19:54.786 ] 
00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.786 "name": "Existed_Raid", 00:19:54.786 "uuid": "183278bc-27a3-4804-b5e1-a00596961066", 00:19:54.786 "strip_size_kb": 64, 00:19:54.786 "state": "online", 00:19:54.786 "raid_level": "raid0", 00:19:54.786 "superblock": false, 00:19:54.786 "num_base_bdevs": 3, 00:19:54.786 "num_base_bdevs_discovered": 3, 00:19:54.786 "num_base_bdevs_operational": 3, 00:19:54.786 "base_bdevs_list": [ 00:19:54.786 { 00:19:54.786 "name": "BaseBdev1", 00:19:54.786 "uuid": "9109d324-f4a0-4249-815b-5a49e755d137", 00:19:54.786 "is_configured": true, 00:19:54.786 "data_offset": 0, 00:19:54.786 "data_size": 65536 00:19:54.786 }, 00:19:54.786 { 00:19:54.786 "name": "BaseBdev2", 00:19:54.786 "uuid": "cd091048-d4f1-4e85-90ca-a7fcf80b4b9f", 00:19:54.786 "is_configured": true, 00:19:54.786 "data_offset": 0, 00:19:54.786 "data_size": 65536 00:19:54.786 }, 00:19:54.786 { 00:19:54.786 "name": "BaseBdev3", 00:19:54.786 "uuid": "556b82d6-5191-497a-9b2f-cdf47d9924e3", 00:19:54.786 "is_configured": true, 00:19:54.786 "data_offset": 0, 00:19:54.786 "data_size": 65536 00:19:54.786 } 00:19:54.786 ] 00:19:54.786 }' 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.786 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.044 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:55.045 [2024-10-07 07:40:54.545601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:55.045 "name": "Existed_Raid", 00:19:55.045 "aliases": [ 00:19:55.045 "183278bc-27a3-4804-b5e1-a00596961066" 00:19:55.045 ], 00:19:55.045 "product_name": "Raid Volume", 00:19:55.045 "block_size": 512, 00:19:55.045 "num_blocks": 196608, 00:19:55.045 "uuid": "183278bc-27a3-4804-b5e1-a00596961066", 00:19:55.045 "assigned_rate_limits": { 00:19:55.045 "rw_ios_per_sec": 0, 00:19:55.045 "rw_mbytes_per_sec": 0, 00:19:55.045 "r_mbytes_per_sec": 0, 00:19:55.045 "w_mbytes_per_sec": 0 00:19:55.045 }, 00:19:55.045 "claimed": false, 00:19:55.045 "zoned": false, 00:19:55.045 "supported_io_types": { 00:19:55.045 "read": true, 00:19:55.045 "write": true, 00:19:55.045 "unmap": true, 00:19:55.045 "flush": true, 00:19:55.045 "reset": true, 00:19:55.045 "nvme_admin": false, 00:19:55.045 "nvme_io": false, 00:19:55.045 "nvme_io_md": false, 00:19:55.045 "write_zeroes": true, 00:19:55.045 "zcopy": false, 00:19:55.045 "get_zone_info": false, 00:19:55.045 "zone_management": false, 00:19:55.045 
"zone_append": false, 00:19:55.045 "compare": false, 00:19:55.045 "compare_and_write": false, 00:19:55.045 "abort": false, 00:19:55.045 "seek_hole": false, 00:19:55.045 "seek_data": false, 00:19:55.045 "copy": false, 00:19:55.045 "nvme_iov_md": false 00:19:55.045 }, 00:19:55.045 "memory_domains": [ 00:19:55.045 { 00:19:55.045 "dma_device_id": "system", 00:19:55.045 "dma_device_type": 1 00:19:55.045 }, 00:19:55.045 { 00:19:55.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.045 "dma_device_type": 2 00:19:55.045 }, 00:19:55.045 { 00:19:55.045 "dma_device_id": "system", 00:19:55.045 "dma_device_type": 1 00:19:55.045 }, 00:19:55.045 { 00:19:55.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.045 "dma_device_type": 2 00:19:55.045 }, 00:19:55.045 { 00:19:55.045 "dma_device_id": "system", 00:19:55.045 "dma_device_type": 1 00:19:55.045 }, 00:19:55.045 { 00:19:55.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.045 "dma_device_type": 2 00:19:55.045 } 00:19:55.045 ], 00:19:55.045 "driver_specific": { 00:19:55.045 "raid": { 00:19:55.045 "uuid": "183278bc-27a3-4804-b5e1-a00596961066", 00:19:55.045 "strip_size_kb": 64, 00:19:55.045 "state": "online", 00:19:55.045 "raid_level": "raid0", 00:19:55.045 "superblock": false, 00:19:55.045 "num_base_bdevs": 3, 00:19:55.045 "num_base_bdevs_discovered": 3, 00:19:55.045 "num_base_bdevs_operational": 3, 00:19:55.045 "base_bdevs_list": [ 00:19:55.045 { 00:19:55.045 "name": "BaseBdev1", 00:19:55.045 "uuid": "9109d324-f4a0-4249-815b-5a49e755d137", 00:19:55.045 "is_configured": true, 00:19:55.045 "data_offset": 0, 00:19:55.045 "data_size": 65536 00:19:55.045 }, 00:19:55.045 { 00:19:55.045 "name": "BaseBdev2", 00:19:55.045 "uuid": "cd091048-d4f1-4e85-90ca-a7fcf80b4b9f", 00:19:55.045 "is_configured": true, 00:19:55.045 "data_offset": 0, 00:19:55.045 "data_size": 65536 00:19:55.045 }, 00:19:55.045 { 00:19:55.045 "name": "BaseBdev3", 00:19:55.045 "uuid": "556b82d6-5191-497a-9b2f-cdf47d9924e3", 00:19:55.045 "is_configured": true, 
00:19:55.045 "data_offset": 0, 00:19:55.045 "data_size": 65536 00:19:55.045 } 00:19:55.045 ] 00:19:55.045 } 00:19:55.045 } 00:19:55.045 }' 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:55.045 BaseBdev2 00:19:55.045 BaseBdev3' 00:19:55.045 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.302 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:55.302 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.302 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:55.302 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.303 07:40:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.303 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.303 [2024-10-07 07:40:54.777330] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:55.303 [2024-10-07 07:40:54.777367] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.303 [2024-10-07 07:40:54.777426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.561 "name": "Existed_Raid", 00:19:55.561 "uuid": "183278bc-27a3-4804-b5e1-a00596961066", 00:19:55.561 "strip_size_kb": 64, 00:19:55.561 "state": "offline", 00:19:55.561 "raid_level": "raid0", 00:19:55.561 "superblock": false, 00:19:55.561 "num_base_bdevs": 3, 00:19:55.561 "num_base_bdevs_discovered": 2, 00:19:55.561 "num_base_bdevs_operational": 2, 00:19:55.561 "base_bdevs_list": [ 00:19:55.561 { 00:19:55.561 "name": null, 00:19:55.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.561 "is_configured": false, 00:19:55.561 "data_offset": 0, 00:19:55.561 "data_size": 65536 00:19:55.561 }, 00:19:55.561 { 00:19:55.561 "name": "BaseBdev2", 00:19:55.561 "uuid": "cd091048-d4f1-4e85-90ca-a7fcf80b4b9f", 00:19:55.561 "is_configured": true, 00:19:55.561 "data_offset": 0, 00:19:55.561 "data_size": 65536 00:19:55.561 }, 00:19:55.561 { 00:19:55.561 "name": "BaseBdev3", 00:19:55.561 "uuid": "556b82d6-5191-497a-9b2f-cdf47d9924e3", 00:19:55.561 "is_configured": true, 00:19:55.561 "data_offset": 0, 00:19:55.561 "data_size": 65536 00:19:55.561 } 00:19:55.561 ] 00:19:55.561 }' 00:19:55.561 07:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.561 07:40:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.818 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:55.818 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:55.818 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:55.818 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:55.819 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.819 [2024-10-07 07:40:55.364787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.077 07:40:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.077 [2024-10-07 07:40:55.521208] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:56.077 [2024-10-07 07:40:55.521411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.077 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.336 BaseBdev2 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.336 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.336 [ 00:19:56.336 { 00:19:56.336 "name": "BaseBdev2", 00:19:56.336 "aliases": [ 00:19:56.336 "a9c1be0f-2140-41d9-88f6-76b28560f9bf" 00:19:56.336 ], 00:19:56.336 "product_name": "Malloc disk", 00:19:56.336 "block_size": 512, 00:19:56.336 "num_blocks": 65536, 00:19:56.336 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:56.336 "assigned_rate_limits": { 00:19:56.336 "rw_ios_per_sec": 0, 00:19:56.336 "rw_mbytes_per_sec": 0, 00:19:56.336 "r_mbytes_per_sec": 0, 00:19:56.336 "w_mbytes_per_sec": 0 00:19:56.336 }, 00:19:56.336 "claimed": false, 00:19:56.336 "zoned": false, 00:19:56.336 "supported_io_types": { 00:19:56.336 "read": true, 00:19:56.336 "write": true, 00:19:56.336 "unmap": true, 00:19:56.336 "flush": true, 00:19:56.336 "reset": true, 00:19:56.336 "nvme_admin": false, 00:19:56.336 "nvme_io": false, 00:19:56.336 "nvme_io_md": false, 00:19:56.336 "write_zeroes": true, 00:19:56.336 "zcopy": true, 00:19:56.336 "get_zone_info": false, 00:19:56.336 "zone_management": false, 00:19:56.336 "zone_append": false, 00:19:56.336 "compare": false, 00:19:56.337 "compare_and_write": false, 00:19:56.337 "abort": true, 00:19:56.337 "seek_hole": false, 00:19:56.337 "seek_data": false, 00:19:56.337 "copy": true, 00:19:56.337 "nvme_iov_md": false 00:19:56.337 }, 00:19:56.337 "memory_domains": [ 00:19:56.337 { 00:19:56.337 "dma_device_id": "system", 00:19:56.337 "dma_device_type": 1 00:19:56.337 }, 
00:19:56.337 { 00:19:56.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.337 "dma_device_type": 2 00:19:56.337 } 00:19:56.337 ], 00:19:56.337 "driver_specific": {} 00:19:56.337 } 00:19:56.337 ] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.337 BaseBdev3 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 
00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.337 [ 00:19:56.337 { 00:19:56.337 "name": "BaseBdev3", 00:19:56.337 "aliases": [ 00:19:56.337 "7b3b8b81-8ea0-4366-8cfa-5118a619e658" 00:19:56.337 ], 00:19:56.337 "product_name": "Malloc disk", 00:19:56.337 "block_size": 512, 00:19:56.337 "num_blocks": 65536, 00:19:56.337 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:56.337 "assigned_rate_limits": { 00:19:56.337 "rw_ios_per_sec": 0, 00:19:56.337 "rw_mbytes_per_sec": 0, 00:19:56.337 "r_mbytes_per_sec": 0, 00:19:56.337 "w_mbytes_per_sec": 0 00:19:56.337 }, 00:19:56.337 "claimed": false, 00:19:56.337 "zoned": false, 00:19:56.337 "supported_io_types": { 00:19:56.337 "read": true, 00:19:56.337 "write": true, 00:19:56.337 "unmap": true, 00:19:56.337 "flush": true, 00:19:56.337 "reset": true, 00:19:56.337 "nvme_admin": false, 00:19:56.337 "nvme_io": false, 00:19:56.337 "nvme_io_md": false, 00:19:56.337 "write_zeroes": true, 00:19:56.337 "zcopy": true, 00:19:56.337 "get_zone_info": false, 00:19:56.337 "zone_management": false, 00:19:56.337 "zone_append": false, 00:19:56.337 "compare": false, 00:19:56.337 "compare_and_write": false, 00:19:56.337 "abort": true, 00:19:56.337 "seek_hole": false, 00:19:56.337 "seek_data": false, 00:19:56.337 "copy": true, 00:19:56.337 "nvme_iov_md": false 00:19:56.337 }, 00:19:56.337 "memory_domains": [ 00:19:56.337 { 00:19:56.337 "dma_device_id": "system", 00:19:56.337 "dma_device_type": 1 00:19:56.337 }, 00:19:56.337 { 
00:19:56.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.337 "dma_device_type": 2 00:19:56.337 } 00:19:56.337 ], 00:19:56.337 "driver_specific": {} 00:19:56.337 } 00:19:56.337 ] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.337 [2024-10-07 07:40:55.839521] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:56.337 [2024-10-07 07:40:55.839582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:56.337 [2024-10-07 07:40:55.839612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.337 [2024-10-07 07:40:55.841899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.337 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.596 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.596 "name": "Existed_Raid", 00:19:56.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.596 "strip_size_kb": 64, 00:19:56.596 "state": "configuring", 00:19:56.596 "raid_level": "raid0", 00:19:56.596 "superblock": false, 00:19:56.596 "num_base_bdevs": 3, 00:19:56.596 "num_base_bdevs_discovered": 2, 00:19:56.596 "num_base_bdevs_operational": 3, 00:19:56.596 "base_bdevs_list": [ 00:19:56.596 { 00:19:56.596 "name": "BaseBdev1", 00:19:56.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.596 
"is_configured": false, 00:19:56.596 "data_offset": 0, 00:19:56.596 "data_size": 0 00:19:56.596 }, 00:19:56.596 { 00:19:56.596 "name": "BaseBdev2", 00:19:56.596 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:56.596 "is_configured": true, 00:19:56.596 "data_offset": 0, 00:19:56.596 "data_size": 65536 00:19:56.596 }, 00:19:56.596 { 00:19:56.596 "name": "BaseBdev3", 00:19:56.596 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:56.596 "is_configured": true, 00:19:56.596 "data_offset": 0, 00:19:56.596 "data_size": 65536 00:19:56.596 } 00:19:56.596 ] 00:19:56.596 }' 00:19:56.596 07:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.596 07:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.854 [2024-10-07 07:40:56.275579] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.854 07:40:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.854 "name": "Existed_Raid", 00:19:56.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.854 "strip_size_kb": 64, 00:19:56.854 "state": "configuring", 00:19:56.854 "raid_level": "raid0", 00:19:56.854 "superblock": false, 00:19:56.854 "num_base_bdevs": 3, 00:19:56.854 "num_base_bdevs_discovered": 1, 00:19:56.854 "num_base_bdevs_operational": 3, 00:19:56.854 "base_bdevs_list": [ 00:19:56.854 { 00:19:56.854 "name": "BaseBdev1", 00:19:56.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.854 "is_configured": false, 00:19:56.854 "data_offset": 0, 00:19:56.854 "data_size": 0 00:19:56.854 }, 00:19:56.854 { 00:19:56.854 "name": null, 00:19:56.854 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:56.854 "is_configured": false, 00:19:56.854 "data_offset": 0, 
00:19:56.854 "data_size": 65536 00:19:56.854 }, 00:19:56.854 { 00:19:56.854 "name": "BaseBdev3", 00:19:56.854 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:56.854 "is_configured": true, 00:19:56.854 "data_offset": 0, 00:19:56.854 "data_size": 65536 00:19:56.854 } 00:19:56.854 ] 00:19:56.854 }' 00:19:56.854 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.855 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.422 [2024-10-07 07:40:56.831341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.422 BaseBdev1 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_name=BaseBdev1 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.422 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.422 [ 00:19:57.422 { 00:19:57.422 "name": "BaseBdev1", 00:19:57.422 "aliases": [ 00:19:57.422 "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7" 00:19:57.422 ], 00:19:57.422 "product_name": "Malloc disk", 00:19:57.422 "block_size": 512, 00:19:57.422 "num_blocks": 65536, 00:19:57.422 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:19:57.422 "assigned_rate_limits": { 00:19:57.422 "rw_ios_per_sec": 0, 00:19:57.422 "rw_mbytes_per_sec": 0, 00:19:57.422 "r_mbytes_per_sec": 0, 00:19:57.422 "w_mbytes_per_sec": 0 00:19:57.422 }, 00:19:57.422 "claimed": true, 00:19:57.422 "claim_type": "exclusive_write", 00:19:57.422 "zoned": false, 00:19:57.422 "supported_io_types": { 00:19:57.422 "read": true, 00:19:57.422 "write": true, 00:19:57.422 "unmap": 
true, 00:19:57.422 "flush": true, 00:19:57.422 "reset": true, 00:19:57.423 "nvme_admin": false, 00:19:57.423 "nvme_io": false, 00:19:57.423 "nvme_io_md": false, 00:19:57.423 "write_zeroes": true, 00:19:57.423 "zcopy": true, 00:19:57.423 "get_zone_info": false, 00:19:57.423 "zone_management": false, 00:19:57.423 "zone_append": false, 00:19:57.423 "compare": false, 00:19:57.423 "compare_and_write": false, 00:19:57.423 "abort": true, 00:19:57.423 "seek_hole": false, 00:19:57.423 "seek_data": false, 00:19:57.423 "copy": true, 00:19:57.423 "nvme_iov_md": false 00:19:57.423 }, 00:19:57.423 "memory_domains": [ 00:19:57.423 { 00:19:57.423 "dma_device_id": "system", 00:19:57.423 "dma_device_type": 1 00:19:57.423 }, 00:19:57.423 { 00:19:57.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.423 "dma_device_type": 2 00:19:57.423 } 00:19:57.423 ], 00:19:57.423 "driver_specific": {} 00:19:57.423 } 00:19:57.423 ] 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.423 07:40:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.423 "name": "Existed_Raid", 00:19:57.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.423 "strip_size_kb": 64, 00:19:57.423 "state": "configuring", 00:19:57.423 "raid_level": "raid0", 00:19:57.423 "superblock": false, 00:19:57.423 "num_base_bdevs": 3, 00:19:57.423 "num_base_bdevs_discovered": 2, 00:19:57.423 "num_base_bdevs_operational": 3, 00:19:57.423 "base_bdevs_list": [ 00:19:57.423 { 00:19:57.423 "name": "BaseBdev1", 00:19:57.423 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:19:57.423 "is_configured": true, 00:19:57.423 "data_offset": 0, 00:19:57.423 "data_size": 65536 00:19:57.423 }, 00:19:57.423 { 00:19:57.423 "name": null, 00:19:57.423 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:57.423 "is_configured": false, 00:19:57.423 "data_offset": 0, 00:19:57.423 "data_size": 65536 00:19:57.423 }, 00:19:57.423 { 00:19:57.423 "name": "BaseBdev3", 00:19:57.423 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:57.423 "is_configured": true, 00:19:57.423 "data_offset": 0, 
00:19:57.423 "data_size": 65536 00:19:57.423 } 00:19:57.423 ] 00:19:57.423 }' 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.423 07:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.989 [2024-10-07 07:40:57.367548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:57.989 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.989 "name": "Existed_Raid", 00:19:57.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.989 "strip_size_kb": 64, 00:19:57.989 "state": "configuring", 00:19:57.989 "raid_level": "raid0", 00:19:57.989 "superblock": false, 00:19:57.989 "num_base_bdevs": 3, 00:19:57.989 "num_base_bdevs_discovered": 1, 00:19:57.989 "num_base_bdevs_operational": 3, 00:19:57.989 "base_bdevs_list": [ 00:19:57.989 { 00:19:57.989 "name": "BaseBdev1", 00:19:57.989 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:19:57.989 "is_configured": true, 00:19:57.989 "data_offset": 0, 00:19:57.990 "data_size": 65536 00:19:57.990 }, 00:19:57.990 { 
00:19:57.990 "name": null, 00:19:57.990 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:57.990 "is_configured": false, 00:19:57.990 "data_offset": 0, 00:19:57.990 "data_size": 65536 00:19:57.990 }, 00:19:57.990 { 00:19:57.990 "name": null, 00:19:57.990 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:57.990 "is_configured": false, 00:19:57.990 "data_offset": 0, 00:19:57.990 "data_size": 65536 00:19:57.990 } 00:19:57.990 ] 00:19:57.990 }' 00:19:57.990 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.990 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.555 [2024-10-07 07:40:57.875656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.555 "name": "Existed_Raid", 00:19:58.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.555 "strip_size_kb": 64, 00:19:58.555 "state": "configuring", 00:19:58.555 "raid_level": "raid0", 00:19:58.555 
"superblock": false, 00:19:58.555 "num_base_bdevs": 3, 00:19:58.555 "num_base_bdevs_discovered": 2, 00:19:58.555 "num_base_bdevs_operational": 3, 00:19:58.555 "base_bdevs_list": [ 00:19:58.555 { 00:19:58.555 "name": "BaseBdev1", 00:19:58.555 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:19:58.555 "is_configured": true, 00:19:58.555 "data_offset": 0, 00:19:58.555 "data_size": 65536 00:19:58.555 }, 00:19:58.555 { 00:19:58.555 "name": null, 00:19:58.555 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:58.555 "is_configured": false, 00:19:58.555 "data_offset": 0, 00:19:58.555 "data_size": 65536 00:19:58.555 }, 00:19:58.555 { 00:19:58.555 "name": "BaseBdev3", 00:19:58.555 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:58.555 "is_configured": true, 00:19:58.555 "data_offset": 0, 00:19:58.555 "data_size": 65536 00:19:58.555 } 00:19:58.555 ] 00:19:58.555 }' 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.555 07:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 
00:19:58.813 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.813 [2024-10-07 07:40:58.359827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.070 "name": "Existed_Raid", 00:19:59.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.070 "strip_size_kb": 64, 00:19:59.070 "state": "configuring", 00:19:59.070 "raid_level": "raid0", 00:19:59.070 "superblock": false, 00:19:59.070 "num_base_bdevs": 3, 00:19:59.070 "num_base_bdevs_discovered": 1, 00:19:59.070 "num_base_bdevs_operational": 3, 00:19:59.070 "base_bdevs_list": [ 00:19:59.070 { 00:19:59.070 "name": null, 00:19:59.070 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:19:59.070 "is_configured": false, 00:19:59.070 "data_offset": 0, 00:19:59.070 "data_size": 65536 00:19:59.070 }, 00:19:59.070 { 00:19:59.070 "name": null, 00:19:59.070 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:59.070 "is_configured": false, 00:19:59.070 "data_offset": 0, 00:19:59.070 "data_size": 65536 00:19:59.070 }, 00:19:59.070 { 00:19:59.070 "name": "BaseBdev3", 00:19:59.070 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:59.070 "is_configured": true, 00:19:59.070 "data_offset": 0, 00:19:59.070 "data_size": 65536 00:19:59.070 } 00:19:59.070 ] 00:19:59.070 }' 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.070 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- 
# [[ 0 == 0 ]] 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.634 [2024-10-07 07:40:58.979335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:59.634 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:59.635 07:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.635 07:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.635 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:19:59.635 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.635 "name": "Existed_Raid", 00:19:59.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.635 "strip_size_kb": 64, 00:19:59.635 "state": "configuring", 00:19:59.635 "raid_level": "raid0", 00:19:59.635 "superblock": false, 00:19:59.635 "num_base_bdevs": 3, 00:19:59.635 "num_base_bdevs_discovered": 2, 00:19:59.635 "num_base_bdevs_operational": 3, 00:19:59.635 "base_bdevs_list": [ 00:19:59.635 { 00:19:59.635 "name": null, 00:19:59.635 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:19:59.635 "is_configured": false, 00:19:59.635 "data_offset": 0, 00:19:59.635 "data_size": 65536 00:19:59.635 }, 00:19:59.635 { 00:19:59.635 "name": "BaseBdev2", 00:19:59.635 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:19:59.635 "is_configured": true, 00:19:59.635 "data_offset": 0, 00:19:59.635 "data_size": 65536 00:19:59.635 }, 00:19:59.635 { 00:19:59.635 "name": "BaseBdev3", 00:19:59.635 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:19:59.635 "is_configured": true, 00:19:59.635 "data_offset": 0, 00:19:59.635 "data_size": 65536 00:19:59.635 } 00:19:59.635 ] 00:19:59.635 }' 00:19:59.635 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.635 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.893 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:59.893 
07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.893 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:19:59.893 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.151 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.152 [2024-10-07 07:40:59.575107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:00.152 [2024-10-07 07:40:59.575186] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:00.152 [2024-10-07 07:40:59.575201] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:00.152 [2024-10-07 07:40:59.575553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:20:00.152 [2024-10-07 07:40:59.575746] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:00.152 [2024-10-07 07:40:59.575759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:00.152 NewBaseBdev 00:20:00.152 [2024-10-07 07:40:59.576108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:20:00.152 [ 00:20:00.152 { 00:20:00.152 "name": "NewBaseBdev", 00:20:00.152 "aliases": [ 00:20:00.152 "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7" 00:20:00.152 ], 00:20:00.152 "product_name": "Malloc disk", 00:20:00.152 "block_size": 512, 00:20:00.152 "num_blocks": 65536, 00:20:00.152 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:20:00.152 "assigned_rate_limits": { 00:20:00.152 "rw_ios_per_sec": 0, 00:20:00.152 "rw_mbytes_per_sec": 0, 00:20:00.152 "r_mbytes_per_sec": 0, 00:20:00.152 "w_mbytes_per_sec": 0 00:20:00.152 }, 00:20:00.152 "claimed": true, 00:20:00.152 "claim_type": "exclusive_write", 00:20:00.152 "zoned": false, 00:20:00.152 "supported_io_types": { 00:20:00.152 "read": true, 00:20:00.152 "write": true, 00:20:00.152 "unmap": true, 00:20:00.152 "flush": true, 00:20:00.152 "reset": true, 00:20:00.152 "nvme_admin": false, 00:20:00.152 "nvme_io": false, 00:20:00.152 "nvme_io_md": false, 00:20:00.152 "write_zeroes": true, 00:20:00.152 "zcopy": true, 00:20:00.152 "get_zone_info": false, 00:20:00.152 "zone_management": false, 00:20:00.152 "zone_append": false, 00:20:00.152 "compare": false, 00:20:00.152 "compare_and_write": false, 00:20:00.152 "abort": true, 00:20:00.152 "seek_hole": false, 00:20:00.152 "seek_data": false, 00:20:00.152 "copy": true, 00:20:00.152 "nvme_iov_md": false 00:20:00.152 }, 00:20:00.152 "memory_domains": [ 00:20:00.152 { 00:20:00.152 "dma_device_id": "system", 00:20:00.152 "dma_device_type": 1 00:20:00.152 }, 00:20:00.152 { 00:20:00.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.152 "dma_device_type": 2 00:20:00.152 } 00:20:00.152 ], 00:20:00.152 "driver_specific": {} 00:20:00.152 } 00:20:00.152 ] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.152 "name": "Existed_Raid", 00:20:00.152 "uuid": "d6131822-0e86-4933-be83-c1f66cff50ca", 00:20:00.152 "strip_size_kb": 64, 00:20:00.152 "state": "online", 00:20:00.152 "raid_level": "raid0", 00:20:00.152 "superblock": false, 00:20:00.152 "num_base_bdevs": 3, 00:20:00.152 
"num_base_bdevs_discovered": 3, 00:20:00.152 "num_base_bdevs_operational": 3, 00:20:00.152 "base_bdevs_list": [ 00:20:00.152 { 00:20:00.152 "name": "NewBaseBdev", 00:20:00.152 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:20:00.152 "is_configured": true, 00:20:00.152 "data_offset": 0, 00:20:00.152 "data_size": 65536 00:20:00.152 }, 00:20:00.152 { 00:20:00.152 "name": "BaseBdev2", 00:20:00.152 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:20:00.152 "is_configured": true, 00:20:00.152 "data_offset": 0, 00:20:00.152 "data_size": 65536 00:20:00.152 }, 00:20:00.152 { 00:20:00.152 "name": "BaseBdev3", 00:20:00.152 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:20:00.152 "is_configured": true, 00:20:00.152 "data_offset": 0, 00:20:00.152 "data_size": 65536 00:20:00.152 } 00:20:00.152 ] 00:20:00.152 }' 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.152 07:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 [2024-10-07 07:41:00.119743] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.723 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:00.723 "name": "Existed_Raid", 00:20:00.723 "aliases": [ 00:20:00.723 "d6131822-0e86-4933-be83-c1f66cff50ca" 00:20:00.723 ], 00:20:00.723 "product_name": "Raid Volume", 00:20:00.723 "block_size": 512, 00:20:00.723 "num_blocks": 196608, 00:20:00.723 "uuid": "d6131822-0e86-4933-be83-c1f66cff50ca", 00:20:00.723 "assigned_rate_limits": { 00:20:00.723 "rw_ios_per_sec": 0, 00:20:00.723 "rw_mbytes_per_sec": 0, 00:20:00.723 "r_mbytes_per_sec": 0, 00:20:00.723 "w_mbytes_per_sec": 0 00:20:00.723 }, 00:20:00.723 "claimed": false, 00:20:00.723 "zoned": false, 00:20:00.723 "supported_io_types": { 00:20:00.723 "read": true, 00:20:00.723 "write": true, 00:20:00.723 "unmap": true, 00:20:00.723 "flush": true, 00:20:00.723 "reset": true, 00:20:00.723 "nvme_admin": false, 00:20:00.723 "nvme_io": false, 00:20:00.723 "nvme_io_md": false, 00:20:00.723 "write_zeroes": true, 00:20:00.723 "zcopy": false, 00:20:00.723 "get_zone_info": false, 00:20:00.723 "zone_management": false, 00:20:00.723 "zone_append": false, 00:20:00.723 "compare": false, 00:20:00.723 "compare_and_write": false, 00:20:00.723 "abort": false, 00:20:00.723 "seek_hole": false, 00:20:00.723 "seek_data": false, 00:20:00.723 "copy": false, 00:20:00.723 "nvme_iov_md": false 00:20:00.723 }, 00:20:00.724 "memory_domains": [ 00:20:00.724 { 00:20:00.724 "dma_device_id": "system", 00:20:00.724 "dma_device_type": 1 00:20:00.724 }, 00:20:00.724 { 00:20:00.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.724 "dma_device_type": 2 00:20:00.724 }, 00:20:00.724 
{ 00:20:00.724 "dma_device_id": "system", 00:20:00.724 "dma_device_type": 1 00:20:00.724 }, 00:20:00.724 { 00:20:00.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.724 "dma_device_type": 2 00:20:00.724 }, 00:20:00.724 { 00:20:00.724 "dma_device_id": "system", 00:20:00.724 "dma_device_type": 1 00:20:00.724 }, 00:20:00.724 { 00:20:00.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.724 "dma_device_type": 2 00:20:00.724 } 00:20:00.724 ], 00:20:00.724 "driver_specific": { 00:20:00.724 "raid": { 00:20:00.724 "uuid": "d6131822-0e86-4933-be83-c1f66cff50ca", 00:20:00.724 "strip_size_kb": 64, 00:20:00.724 "state": "online", 00:20:00.724 "raid_level": "raid0", 00:20:00.724 "superblock": false, 00:20:00.724 "num_base_bdevs": 3, 00:20:00.724 "num_base_bdevs_discovered": 3, 00:20:00.724 "num_base_bdevs_operational": 3, 00:20:00.724 "base_bdevs_list": [ 00:20:00.724 { 00:20:00.724 "name": "NewBaseBdev", 00:20:00.724 "uuid": "f61fd2a5-0116-4c4a-aec5-3ae46bc91aa7", 00:20:00.724 "is_configured": true, 00:20:00.724 "data_offset": 0, 00:20:00.724 "data_size": 65536 00:20:00.724 }, 00:20:00.724 { 00:20:00.724 "name": "BaseBdev2", 00:20:00.724 "uuid": "a9c1be0f-2140-41d9-88f6-76b28560f9bf", 00:20:00.724 "is_configured": true, 00:20:00.724 "data_offset": 0, 00:20:00.724 "data_size": 65536 00:20:00.724 }, 00:20:00.724 { 00:20:00.724 "name": "BaseBdev3", 00:20:00.724 "uuid": "7b3b8b81-8ea0-4366-8cfa-5118a619e658", 00:20:00.724 "is_configured": true, 00:20:00.724 "data_offset": 0, 00:20:00.724 "data_size": 65536 00:20:00.724 } 00:20:00.724 ] 00:20:00.724 } 00:20:00.724 } 00:20:00.724 }' 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:00.724 BaseBdev2 00:20:00.724 BaseBdev3' 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.724 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.991 
07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.991 [2024-10-07 07:41:00.371353] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.991 [2024-10-07 07:41:00.371403] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.991 [2024-10-07 07:41:00.371518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.991 [2024-10-07 07:41:00.371594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.991 [2024-10-07 07:41:00.371611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63864 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 63864 ']' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # kill -0 63864 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 63864 00:20:00.991 killing process with pid 63864 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 63864' 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 63864 00:20:00.991 [2024-10-07 07:41:00.413545] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.991 07:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 63864 00:20:01.250 [2024-10-07 07:41:00.782839] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:03.153 00:20:03.153 real 0m11.613s 00:20:03.153 user 0m18.240s 00:20:03.153 sys 0m1.971s 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:03.153 
07:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.153 ************************************ 00:20:03.153 END TEST raid_state_function_test 00:20:03.153 ************************************ 00:20:03.153 07:41:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:20:03.153 07:41:02 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:20:03.153 07:41:02 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:03.153 07:41:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.153 ************************************ 00:20:03.153 START TEST raid_state_function_test_sb 00:20:03.153 ************************************ 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid0 3 true 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:03.153 Process raid pid: 64502 00:20:03.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64502 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64502' 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64502 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 64502 ']' 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.153 07:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:03.153 [2024-10-07 07:41:02.587061] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:20:03.153 [2024-10-07 07:41:02.587471] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.412 [2024-10-07 07:41:02.775111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.670 [2024-10-07 07:41:03.070671] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.928 [2024-10-07 07:41:03.283838] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.928 [2024-10-07 07:41:03.284065] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.188 [2024-10-07 07:41:03.528122] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:04.188 [2024-10-07 07:41:03.528181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:04.188 [2024-10-07 07:41:03.528193] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.188 [2024-10-07 07:41:03.528208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.188 [2024-10-07 07:41:03.528216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:20:04.188 [2024-10-07 07:41:03.528229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.188 "name": "Existed_Raid", 00:20:04.188 "uuid": "5495240d-cdd5-4268-a168-318546c82054", 00:20:04.188 "strip_size_kb": 64, 00:20:04.188 "state": "configuring", 00:20:04.188 "raid_level": "raid0", 00:20:04.188 "superblock": true, 00:20:04.188 "num_base_bdevs": 3, 00:20:04.188 "num_base_bdevs_discovered": 0, 00:20:04.188 "num_base_bdevs_operational": 3, 00:20:04.188 "base_bdevs_list": [ 00:20:04.188 { 00:20:04.188 "name": "BaseBdev1", 00:20:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.188 "is_configured": false, 00:20:04.188 "data_offset": 0, 00:20:04.188 "data_size": 0 00:20:04.188 }, 00:20:04.188 { 00:20:04.188 "name": "BaseBdev2", 00:20:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.188 "is_configured": false, 00:20:04.188 "data_offset": 0, 00:20:04.188 "data_size": 0 00:20:04.188 }, 00:20:04.188 { 00:20:04.188 "name": "BaseBdev3", 00:20:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.188 "is_configured": false, 00:20:04.188 "data_offset": 0, 00:20:04.188 "data_size": 0 00:20:04.188 } 00:20:04.188 ] 00:20:04.188 }' 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.188 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.474 [2024-10-07 07:41:03.984137] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:04.474 [2024-10-07 07:41:03.984186] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.474 [2024-10-07 07:41:03.992169] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:04.474 [2024-10-07 07:41:03.992224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:04.474 [2024-10-07 07:41:03.992236] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.474 [2024-10-07 07:41:03.992249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.474 [2024-10-07 07:41:03.992258] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:04.474 [2024-10-07 07:41:03.992272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.474 07:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.733 [2024-10-07 07:41:04.056283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.733 BaseBdev1 
00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.733 [ 00:20:04.733 { 00:20:04.733 "name": "BaseBdev1", 00:20:04.733 "aliases": [ 00:20:04.733 "02356041-2eb8-47a8-8152-785e580c6082" 00:20:04.733 ], 00:20:04.733 "product_name": "Malloc disk", 00:20:04.733 "block_size": 512, 00:20:04.733 "num_blocks": 65536, 00:20:04.733 "uuid": "02356041-2eb8-47a8-8152-785e580c6082", 00:20:04.733 "assigned_rate_limits": { 00:20:04.733 
"rw_ios_per_sec": 0, 00:20:04.733 "rw_mbytes_per_sec": 0, 00:20:04.733 "r_mbytes_per_sec": 0, 00:20:04.733 "w_mbytes_per_sec": 0 00:20:04.733 }, 00:20:04.733 "claimed": true, 00:20:04.733 "claim_type": "exclusive_write", 00:20:04.733 "zoned": false, 00:20:04.733 "supported_io_types": { 00:20:04.733 "read": true, 00:20:04.733 "write": true, 00:20:04.733 "unmap": true, 00:20:04.733 "flush": true, 00:20:04.733 "reset": true, 00:20:04.733 "nvme_admin": false, 00:20:04.733 "nvme_io": false, 00:20:04.733 "nvme_io_md": false, 00:20:04.733 "write_zeroes": true, 00:20:04.733 "zcopy": true, 00:20:04.733 "get_zone_info": false, 00:20:04.733 "zone_management": false, 00:20:04.733 "zone_append": false, 00:20:04.733 "compare": false, 00:20:04.733 "compare_and_write": false, 00:20:04.733 "abort": true, 00:20:04.733 "seek_hole": false, 00:20:04.733 "seek_data": false, 00:20:04.733 "copy": true, 00:20:04.733 "nvme_iov_md": false 00:20:04.733 }, 00:20:04.733 "memory_domains": [ 00:20:04.733 { 00:20:04.733 "dma_device_id": "system", 00:20:04.733 "dma_device_type": 1 00:20:04.733 }, 00:20:04.733 { 00:20:04.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.733 "dma_device_type": 2 00:20:04.733 } 00:20:04.733 ], 00:20:04.733 "driver_specific": {} 00:20:04.733 } 00:20:04.733 ] 00:20:04.733 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.734 "name": "Existed_Raid", 00:20:04.734 "uuid": "27b9f9f7-7c53-44ec-8d51-d10efb49011d", 00:20:04.734 "strip_size_kb": 64, 00:20:04.734 "state": "configuring", 00:20:04.734 "raid_level": "raid0", 00:20:04.734 "superblock": true, 00:20:04.734 "num_base_bdevs": 3, 00:20:04.734 "num_base_bdevs_discovered": 1, 00:20:04.734 "num_base_bdevs_operational": 3, 00:20:04.734 "base_bdevs_list": [ 00:20:04.734 { 00:20:04.734 "name": "BaseBdev1", 00:20:04.734 "uuid": "02356041-2eb8-47a8-8152-785e580c6082", 00:20:04.734 "is_configured": true, 00:20:04.734 "data_offset": 2048, 00:20:04.734 "data_size": 63488 
00:20:04.734 }, 00:20:04.734 { 00:20:04.734 "name": "BaseBdev2", 00:20:04.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.734 "is_configured": false, 00:20:04.734 "data_offset": 0, 00:20:04.734 "data_size": 0 00:20:04.734 }, 00:20:04.734 { 00:20:04.734 "name": "BaseBdev3", 00:20:04.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.734 "is_configured": false, 00:20:04.734 "data_offset": 0, 00:20:04.734 "data_size": 0 00:20:04.734 } 00:20:04.734 ] 00:20:04.734 }' 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.734 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.993 [2024-10-07 07:41:04.524993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:04.993 [2024-10-07 07:41:04.525182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.993 [2024-10-07 07:41:04.537040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.993 [2024-10-07 
07:41:04.539399] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.993 [2024-10-07 07:41:04.539585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.993 [2024-10-07 07:41:04.539605] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:04.993 [2024-10-07 07:41:04.539619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:04.993 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.252 07:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:05.252 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.252 "name": "Existed_Raid", 00:20:05.252 "uuid": "832e3a35-7209-4fce-846c-d31bb6f33d74", 00:20:05.252 "strip_size_kb": 64, 00:20:05.252 "state": "configuring", 00:20:05.252 "raid_level": "raid0", 00:20:05.252 "superblock": true, 00:20:05.252 "num_base_bdevs": 3, 00:20:05.252 "num_base_bdevs_discovered": 1, 00:20:05.252 "num_base_bdevs_operational": 3, 00:20:05.252 "base_bdevs_list": [ 00:20:05.252 { 00:20:05.252 "name": "BaseBdev1", 00:20:05.252 "uuid": "02356041-2eb8-47a8-8152-785e580c6082", 00:20:05.252 "is_configured": true, 00:20:05.252 "data_offset": 2048, 00:20:05.252 "data_size": 63488 00:20:05.252 }, 00:20:05.252 { 00:20:05.252 "name": "BaseBdev2", 00:20:05.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.252 "is_configured": false, 00:20:05.252 "data_offset": 0, 00:20:05.252 "data_size": 0 00:20:05.252 }, 00:20:05.252 { 00:20:05.252 "name": "BaseBdev3", 00:20:05.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.252 "is_configured": false, 00:20:05.252 "data_offset": 0, 00:20:05.252 "data_size": 0 00:20:05.252 } 00:20:05.252 ] 00:20:05.252 }' 00:20:05.252 07:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.252 07:41:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.510 [2024-10-07 07:41:05.045952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.510 BaseBdev2 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@564 -- # xtrace_disable 00:20:05.510 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.510 [ 00:20:05.510 { 00:20:05.510 "name": "BaseBdev2", 00:20:05.510 "aliases": [ 00:20:05.510 "84341e05-df8c-46a8-9a3a-b908b242f880" 00:20:05.510 ], 00:20:05.510 "product_name": "Malloc disk", 00:20:05.510 "block_size": 512, 00:20:05.510 "num_blocks": 65536, 00:20:05.510 "uuid": "84341e05-df8c-46a8-9a3a-b908b242f880", 00:20:05.510 "assigned_rate_limits": { 00:20:05.510 "rw_ios_per_sec": 0, 00:20:05.769 "rw_mbytes_per_sec": 0, 00:20:05.769 "r_mbytes_per_sec": 0, 00:20:05.769 "w_mbytes_per_sec": 0 00:20:05.769 }, 00:20:05.769 "claimed": true, 00:20:05.769 "claim_type": "exclusive_write", 00:20:05.769 "zoned": false, 00:20:05.769 "supported_io_types": { 00:20:05.769 "read": true, 00:20:05.769 "write": true, 00:20:05.769 "unmap": true, 00:20:05.769 "flush": true, 00:20:05.769 "reset": true, 00:20:05.769 "nvme_admin": false, 00:20:05.769 "nvme_io": false, 00:20:05.769 "nvme_io_md": false, 00:20:05.769 "write_zeroes": true, 00:20:05.769 "zcopy": true, 00:20:05.769 "get_zone_info": false, 00:20:05.769 "zone_management": false, 00:20:05.769 "zone_append": false, 00:20:05.769 "compare": false, 00:20:05.769 "compare_and_write": false, 00:20:05.769 "abort": true, 00:20:05.769 "seek_hole": false, 00:20:05.769 "seek_data": false, 00:20:05.769 "copy": true, 00:20:05.769 "nvme_iov_md": false 00:20:05.769 }, 00:20:05.769 "memory_domains": [ 00:20:05.769 { 00:20:05.769 "dma_device_id": "system", 00:20:05.769 "dma_device_type": 1 00:20:05.769 }, 00:20:05.769 { 00:20:05.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.769 "dma_device_type": 2 00:20:05.769 } 00:20:05.769 ], 00:20:05.769 "driver_specific": {} 00:20:05.769 } 00:20:05.769 ] 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@910 -- # return 0 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.769 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.770 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.770 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:05.770 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.770 07:41:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:05.770 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.770 "name": "Existed_Raid", 00:20:05.770 "uuid": "832e3a35-7209-4fce-846c-d31bb6f33d74", 00:20:05.770 "strip_size_kb": 64, 00:20:05.770 "state": "configuring", 00:20:05.770 "raid_level": "raid0", 00:20:05.770 "superblock": true, 00:20:05.770 "num_base_bdevs": 3, 00:20:05.770 "num_base_bdevs_discovered": 2, 00:20:05.770 "num_base_bdevs_operational": 3, 00:20:05.770 "base_bdevs_list": [ 00:20:05.770 { 00:20:05.770 "name": "BaseBdev1", 00:20:05.770 "uuid": "02356041-2eb8-47a8-8152-785e580c6082", 00:20:05.770 "is_configured": true, 00:20:05.770 "data_offset": 2048, 00:20:05.770 "data_size": 63488 00:20:05.770 }, 00:20:05.770 { 00:20:05.770 "name": "BaseBdev2", 00:20:05.770 "uuid": "84341e05-df8c-46a8-9a3a-b908b242f880", 00:20:05.770 "is_configured": true, 00:20:05.770 "data_offset": 2048, 00:20:05.770 "data_size": 63488 00:20:05.770 }, 00:20:05.770 { 00:20:05.770 "name": "BaseBdev3", 00:20:05.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.770 "is_configured": false, 00:20:05.770 "data_offset": 0, 00:20:05.770 "data_size": 0 00:20:05.770 } 00:20:05.770 ] 00:20:05.770 }' 00:20:05.770 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.770 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.027 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:06.027 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.027 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.286 [2024-10-07 07:41:05.593863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:06.286 [2024-10-07 07:41:05.594155] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:06.286 [2024-10-07 07:41:05.594181] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:06.286 [2024-10-07 07:41:05.594467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:06.286 BaseBdev3 00:20:06.286 [2024-10-07 07:41:05.594604] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:06.286 [2024-10-07 07:41:05.594620] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:06.286 [2024-10-07 07:41:05.594798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # 
[[ 0 == 0 ]] 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.286 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.286 [ 00:20:06.286 { 00:20:06.286 "name": "BaseBdev3", 00:20:06.286 "aliases": [ 00:20:06.286 "de14484e-42c6-4cbd-ac0b-70ada2ff42fc" 00:20:06.286 ], 00:20:06.286 "product_name": "Malloc disk", 00:20:06.286 "block_size": 512, 00:20:06.286 "num_blocks": 65536, 00:20:06.286 "uuid": "de14484e-42c6-4cbd-ac0b-70ada2ff42fc", 00:20:06.286 "assigned_rate_limits": { 00:20:06.286 "rw_ios_per_sec": 0, 00:20:06.286 "rw_mbytes_per_sec": 0, 00:20:06.286 "r_mbytes_per_sec": 0, 00:20:06.286 "w_mbytes_per_sec": 0 00:20:06.286 }, 00:20:06.286 "claimed": true, 00:20:06.286 "claim_type": "exclusive_write", 00:20:06.286 "zoned": false, 00:20:06.286 "supported_io_types": { 00:20:06.286 "read": true, 00:20:06.286 "write": true, 00:20:06.286 "unmap": true, 00:20:06.286 "flush": true, 00:20:06.286 "reset": true, 00:20:06.286 "nvme_admin": false, 00:20:06.286 "nvme_io": false, 00:20:06.286 "nvme_io_md": false, 00:20:06.286 "write_zeroes": true, 00:20:06.286 "zcopy": true, 00:20:06.286 "get_zone_info": false, 00:20:06.286 "zone_management": false, 00:20:06.286 "zone_append": false, 00:20:06.286 "compare": false, 00:20:06.287 "compare_and_write": false, 00:20:06.287 "abort": true, 00:20:06.287 "seek_hole": false, 00:20:06.287 "seek_data": false, 00:20:06.287 "copy": true, 00:20:06.287 "nvme_iov_md": false 00:20:06.287 }, 00:20:06.287 "memory_domains": [ 00:20:06.287 { 00:20:06.287 "dma_device_id": "system", 00:20:06.287 "dma_device_type": 1 00:20:06.287 }, 00:20:06.287 { 00:20:06.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.287 "dma_device_type": 2 00:20:06.287 } 00:20:06.287 ], 00:20:06.287 "driver_specific": 
{} 00:20:06.287 } 00:20:06.287 ] 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.287 "name": "Existed_Raid", 00:20:06.287 "uuid": "832e3a35-7209-4fce-846c-d31bb6f33d74", 00:20:06.287 "strip_size_kb": 64, 00:20:06.287 "state": "online", 00:20:06.287 "raid_level": "raid0", 00:20:06.287 "superblock": true, 00:20:06.287 "num_base_bdevs": 3, 00:20:06.287 "num_base_bdevs_discovered": 3, 00:20:06.287 "num_base_bdevs_operational": 3, 00:20:06.287 "base_bdevs_list": [ 00:20:06.287 { 00:20:06.287 "name": "BaseBdev1", 00:20:06.287 "uuid": "02356041-2eb8-47a8-8152-785e580c6082", 00:20:06.287 "is_configured": true, 00:20:06.287 "data_offset": 2048, 00:20:06.287 "data_size": 63488 00:20:06.287 }, 00:20:06.287 { 00:20:06.287 "name": "BaseBdev2", 00:20:06.287 "uuid": "84341e05-df8c-46a8-9a3a-b908b242f880", 00:20:06.287 "is_configured": true, 00:20:06.287 "data_offset": 2048, 00:20:06.287 "data_size": 63488 00:20:06.287 }, 00:20:06.287 { 00:20:06.287 "name": "BaseBdev3", 00:20:06.287 "uuid": "de14484e-42c6-4cbd-ac0b-70ada2ff42fc", 00:20:06.287 "is_configured": true, 00:20:06.287 "data_offset": 2048, 00:20:06.287 "data_size": 63488 00:20:06.287 } 00:20:06.287 ] 00:20:06.287 }' 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.287 07:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.854 [2024-10-07 07:41:06.142306] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.854 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.854 "name": "Existed_Raid", 00:20:06.854 "aliases": [ 00:20:06.854 "832e3a35-7209-4fce-846c-d31bb6f33d74" 00:20:06.854 ], 00:20:06.854 "product_name": "Raid Volume", 00:20:06.854 "block_size": 512, 00:20:06.854 "num_blocks": 190464, 00:20:06.854 "uuid": "832e3a35-7209-4fce-846c-d31bb6f33d74", 00:20:06.854 "assigned_rate_limits": { 00:20:06.854 "rw_ios_per_sec": 0, 00:20:06.854 "rw_mbytes_per_sec": 0, 00:20:06.854 "r_mbytes_per_sec": 0, 00:20:06.854 "w_mbytes_per_sec": 0 00:20:06.854 }, 00:20:06.854 "claimed": false, 00:20:06.854 "zoned": false, 00:20:06.854 "supported_io_types": { 00:20:06.854 "read": true, 00:20:06.854 "write": true, 00:20:06.854 "unmap": true, 00:20:06.854 "flush": true, 00:20:06.854 "reset": true, 00:20:06.854 "nvme_admin": false, 00:20:06.854 "nvme_io": false, 00:20:06.854 "nvme_io_md": false, 00:20:06.854 
"write_zeroes": true, 00:20:06.854 "zcopy": false, 00:20:06.854 "get_zone_info": false, 00:20:06.854 "zone_management": false, 00:20:06.854 "zone_append": false, 00:20:06.854 "compare": false, 00:20:06.854 "compare_and_write": false, 00:20:06.854 "abort": false, 00:20:06.854 "seek_hole": false, 00:20:06.854 "seek_data": false, 00:20:06.854 "copy": false, 00:20:06.854 "nvme_iov_md": false 00:20:06.854 }, 00:20:06.854 "memory_domains": [ 00:20:06.854 { 00:20:06.854 "dma_device_id": "system", 00:20:06.854 "dma_device_type": 1 00:20:06.854 }, 00:20:06.854 { 00:20:06.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.854 "dma_device_type": 2 00:20:06.854 }, 00:20:06.854 { 00:20:06.854 "dma_device_id": "system", 00:20:06.854 "dma_device_type": 1 00:20:06.854 }, 00:20:06.854 { 00:20:06.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.854 "dma_device_type": 2 00:20:06.854 }, 00:20:06.854 { 00:20:06.854 "dma_device_id": "system", 00:20:06.854 "dma_device_type": 1 00:20:06.854 }, 00:20:06.854 { 00:20:06.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.854 "dma_device_type": 2 00:20:06.854 } 00:20:06.854 ], 00:20:06.854 "driver_specific": { 00:20:06.854 "raid": { 00:20:06.854 "uuid": "832e3a35-7209-4fce-846c-d31bb6f33d74", 00:20:06.854 "strip_size_kb": 64, 00:20:06.854 "state": "online", 00:20:06.854 "raid_level": "raid0", 00:20:06.854 "superblock": true, 00:20:06.854 "num_base_bdevs": 3, 00:20:06.854 "num_base_bdevs_discovered": 3, 00:20:06.855 "num_base_bdevs_operational": 3, 00:20:06.855 "base_bdevs_list": [ 00:20:06.855 { 00:20:06.855 "name": "BaseBdev1", 00:20:06.855 "uuid": "02356041-2eb8-47a8-8152-785e580c6082", 00:20:06.855 "is_configured": true, 00:20:06.855 "data_offset": 2048, 00:20:06.855 "data_size": 63488 00:20:06.855 }, 00:20:06.855 { 00:20:06.855 "name": "BaseBdev2", 00:20:06.855 "uuid": "84341e05-df8c-46a8-9a3a-b908b242f880", 00:20:06.855 "is_configured": true, 00:20:06.855 "data_offset": 2048, 00:20:06.855 "data_size": 63488 00:20:06.855 }, 
00:20:06.855 { 00:20:06.855 "name": "BaseBdev3", 00:20:06.855 "uuid": "de14484e-42c6-4cbd-ac0b-70ada2ff42fc", 00:20:06.855 "is_configured": true, 00:20:06.855 "data_offset": 2048, 00:20:06.855 "data_size": 63488 00:20:06.855 } 00:20:06.855 ] 00:20:06.855 } 00:20:06.855 } 00:20:06.855 }' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:06.855 BaseBdev2 00:20:06.855 BaseBdev3' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.855 
07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:20:06.855 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.855 [2024-10-07 07:41:06.410068] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.855 [2024-10-07 07:41:06.410103] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.855 [2024-10-07 07:41:06.410164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.113 "name": "Existed_Raid", 00:20:07.113 "uuid": "832e3a35-7209-4fce-846c-d31bb6f33d74", 00:20:07.113 "strip_size_kb": 64, 00:20:07.113 "state": "offline", 00:20:07.113 "raid_level": "raid0", 00:20:07.113 "superblock": true, 00:20:07.113 "num_base_bdevs": 3, 00:20:07.113 "num_base_bdevs_discovered": 2, 00:20:07.113 "num_base_bdevs_operational": 2, 00:20:07.113 "base_bdevs_list": [ 00:20:07.113 { 00:20:07.113 "name": null, 00:20:07.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.113 "is_configured": false, 00:20:07.113 "data_offset": 0, 00:20:07.113 "data_size": 63488 00:20:07.113 }, 00:20:07.113 { 00:20:07.113 "name": "BaseBdev2", 00:20:07.113 "uuid": "84341e05-df8c-46a8-9a3a-b908b242f880", 00:20:07.113 "is_configured": true, 00:20:07.113 "data_offset": 2048, 00:20:07.113 "data_size": 63488 00:20:07.113 }, 00:20:07.113 { 00:20:07.113 "name": "BaseBdev3", 00:20:07.113 "uuid": "de14484e-42c6-4cbd-ac0b-70ada2ff42fc", 
00:20:07.113 "is_configured": true, 00:20:07.113 "data_offset": 2048, 00:20:07.113 "data_size": 63488 00:20:07.113 } 00:20:07.113 ] 00:20:07.113 }' 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.113 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.680 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:07.680 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:07.680 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.680 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.680 07:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:07.680 07:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.680 [2024-10-07 07:41:07.034676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:07.680 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.681 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.681 [2024-10-07 07:41:07.183391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:07.681 [2024-10-07 07:41:07.183633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:07.939 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 BaseBdev2 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:07.940 07:41:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 [ 00:20:07.940 { 00:20:07.940 "name": "BaseBdev2", 00:20:07.940 "aliases": [ 00:20:07.940 "20b2002d-a769-4f39-a1e5-ecd211ebdf9d" 00:20:07.940 ], 00:20:07.940 "product_name": "Malloc disk", 00:20:07.940 "block_size": 512, 00:20:07.940 "num_blocks": 65536, 00:20:07.940 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:07.940 "assigned_rate_limits": { 00:20:07.940 "rw_ios_per_sec": 0, 00:20:07.940 "rw_mbytes_per_sec": 0, 00:20:07.940 "r_mbytes_per_sec": 0, 00:20:07.940 "w_mbytes_per_sec": 0 00:20:07.940 }, 00:20:07.940 "claimed": false, 00:20:07.940 "zoned": false, 00:20:07.940 "supported_io_types": { 00:20:07.940 "read": true, 00:20:07.940 "write": true, 00:20:07.940 "unmap": true, 00:20:07.940 "flush": true, 00:20:07.940 "reset": true, 00:20:07.940 "nvme_admin": false, 00:20:07.940 "nvme_io": false, 00:20:07.940 "nvme_io_md": false, 00:20:07.940 "write_zeroes": true, 00:20:07.940 "zcopy": true, 00:20:07.940 "get_zone_info": false, 00:20:07.940 
"zone_management": false, 00:20:07.940 "zone_append": false, 00:20:07.940 "compare": false, 00:20:07.940 "compare_and_write": false, 00:20:07.940 "abort": true, 00:20:07.940 "seek_hole": false, 00:20:07.940 "seek_data": false, 00:20:07.940 "copy": true, 00:20:07.940 "nvme_iov_md": false 00:20:07.940 }, 00:20:07.940 "memory_domains": [ 00:20:07.940 { 00:20:07.940 "dma_device_id": "system", 00:20:07.940 "dma_device_type": 1 00:20:07.940 }, 00:20:07.940 { 00:20:07.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.940 "dma_device_type": 2 00:20:07.940 } 00:20:07.940 ], 00:20:07.940 "driver_specific": {} 00:20:07.940 } 00:20:07.940 ] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 BaseBdev3 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local i 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:07.940 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 [ 00:20:07.940 { 00:20:07.940 "name": "BaseBdev3", 00:20:07.940 "aliases": [ 00:20:07.940 "58d82f42-53f7-49fc-b8f1-e7487c465f1a" 00:20:07.940 ], 00:20:07.940 "product_name": "Malloc disk", 00:20:07.940 "block_size": 512, 00:20:07.940 "num_blocks": 65536, 00:20:07.940 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:07.940 "assigned_rate_limits": { 00:20:07.940 "rw_ios_per_sec": 0, 00:20:07.940 "rw_mbytes_per_sec": 0, 00:20:07.940 "r_mbytes_per_sec": 0, 00:20:07.940 "w_mbytes_per_sec": 0 00:20:07.940 }, 00:20:07.940 "claimed": false, 00:20:07.940 "zoned": false, 00:20:07.940 "supported_io_types": { 00:20:07.940 "read": true, 00:20:07.940 "write": true, 00:20:07.940 "unmap": true, 00:20:07.940 "flush": true, 00:20:07.940 "reset": true, 00:20:07.940 "nvme_admin": false, 00:20:08.199 "nvme_io": false, 00:20:08.199 "nvme_io_md": false, 00:20:08.199 "write_zeroes": true, 00:20:08.199 
"zcopy": true, 00:20:08.199 "get_zone_info": false, 00:20:08.199 "zone_management": false, 00:20:08.199 "zone_append": false, 00:20:08.199 "compare": false, 00:20:08.199 "compare_and_write": false, 00:20:08.199 "abort": true, 00:20:08.199 "seek_hole": false, 00:20:08.199 "seek_data": false, 00:20:08.199 "copy": true, 00:20:08.199 "nvme_iov_md": false 00:20:08.199 }, 00:20:08.199 "memory_domains": [ 00:20:08.199 { 00:20:08.199 "dma_device_id": "system", 00:20:08.199 "dma_device_type": 1 00:20:08.199 }, 00:20:08.199 { 00:20:08.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.199 "dma_device_type": 2 00:20:08.199 } 00:20:08.199 ], 00:20:08.199 "driver_specific": {} 00:20:08.199 } 00:20:08.199 ] 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.199 [2024-10-07 07:41:07.514207] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.199 [2024-10-07 07:41:07.514392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.199 [2024-10-07 07:41:07.514507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.199 [2024-10-07 07:41:07.517008] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:08.199 07:41:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.199 "name": "Existed_Raid", 00:20:08.199 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:08.199 "strip_size_kb": 64, 00:20:08.199 "state": "configuring", 00:20:08.199 "raid_level": "raid0", 00:20:08.199 "superblock": true, 00:20:08.199 "num_base_bdevs": 3, 00:20:08.199 "num_base_bdevs_discovered": 2, 00:20:08.199 "num_base_bdevs_operational": 3, 00:20:08.199 "base_bdevs_list": [ 00:20:08.199 { 00:20:08.199 "name": "BaseBdev1", 00:20:08.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.199 "is_configured": false, 00:20:08.199 "data_offset": 0, 00:20:08.199 "data_size": 0 00:20:08.199 }, 00:20:08.199 { 00:20:08.199 "name": "BaseBdev2", 00:20:08.199 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:08.199 "is_configured": true, 00:20:08.199 "data_offset": 2048, 00:20:08.199 "data_size": 63488 00:20:08.199 }, 00:20:08.199 { 00:20:08.199 "name": "BaseBdev3", 00:20:08.199 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:08.199 "is_configured": true, 00:20:08.199 "data_offset": 2048, 00:20:08.199 "data_size": 63488 00:20:08.199 } 00:20:08.199 ] 00:20:08.199 }' 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.199 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.458 [2024-10-07 07:41:07.958256] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:08.458 07:41:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.458 07:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:08.458 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.458 "name": "Existed_Raid", 00:20:08.458 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:08.458 "strip_size_kb": 64, 
00:20:08.458 "state": "configuring", 00:20:08.458 "raid_level": "raid0", 00:20:08.458 "superblock": true, 00:20:08.458 "num_base_bdevs": 3, 00:20:08.458 "num_base_bdevs_discovered": 1, 00:20:08.458 "num_base_bdevs_operational": 3, 00:20:08.458 "base_bdevs_list": [ 00:20:08.458 { 00:20:08.458 "name": "BaseBdev1", 00:20:08.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.458 "is_configured": false, 00:20:08.458 "data_offset": 0, 00:20:08.458 "data_size": 0 00:20:08.458 }, 00:20:08.458 { 00:20:08.458 "name": null, 00:20:08.458 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:08.458 "is_configured": false, 00:20:08.458 "data_offset": 0, 00:20:08.458 "data_size": 63488 00:20:08.458 }, 00:20:08.458 { 00:20:08.458 "name": "BaseBdev3", 00:20:08.458 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:08.458 "is_configured": true, 00:20:08.458 "data_offset": 2048, 00:20:08.458 "data_size": 63488 00:20:08.458 } 00:20:08.458 ] 00:20:08.458 }' 00:20:08.458 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.458 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.024 [2024-10-07 07:41:08.496538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.024 BaseBdev1 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.024 
[ 00:20:09.024 { 00:20:09.024 "name": "BaseBdev1", 00:20:09.024 "aliases": [ 00:20:09.024 "be0ad613-a94f-44a8-ae27-5272c5149495" 00:20:09.024 ], 00:20:09.024 "product_name": "Malloc disk", 00:20:09.024 "block_size": 512, 00:20:09.024 "num_blocks": 65536, 00:20:09.024 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:09.024 "assigned_rate_limits": { 00:20:09.024 "rw_ios_per_sec": 0, 00:20:09.024 "rw_mbytes_per_sec": 0, 00:20:09.024 "r_mbytes_per_sec": 0, 00:20:09.024 "w_mbytes_per_sec": 0 00:20:09.024 }, 00:20:09.024 "claimed": true, 00:20:09.024 "claim_type": "exclusive_write", 00:20:09.024 "zoned": false, 00:20:09.024 "supported_io_types": { 00:20:09.024 "read": true, 00:20:09.024 "write": true, 00:20:09.024 "unmap": true, 00:20:09.024 "flush": true, 00:20:09.024 "reset": true, 00:20:09.024 "nvme_admin": false, 00:20:09.024 "nvme_io": false, 00:20:09.024 "nvme_io_md": false, 00:20:09.024 "write_zeroes": true, 00:20:09.024 "zcopy": true, 00:20:09.024 "get_zone_info": false, 00:20:09.024 "zone_management": false, 00:20:09.024 "zone_append": false, 00:20:09.024 "compare": false, 00:20:09.024 "compare_and_write": false, 00:20:09.024 "abort": true, 00:20:09.024 "seek_hole": false, 00:20:09.024 "seek_data": false, 00:20:09.024 "copy": true, 00:20:09.024 "nvme_iov_md": false 00:20:09.024 }, 00:20:09.024 "memory_domains": [ 00:20:09.024 { 00:20:09.024 "dma_device_id": "system", 00:20:09.024 "dma_device_type": 1 00:20:09.024 }, 00:20:09.024 { 00:20:09.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.024 "dma_device_type": 2 00:20:09.024 } 00:20:09.024 ], 00:20:09.024 "driver_specific": {} 00:20:09.024 } 00:20:09.024 ] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.024 "name": "Existed_Raid", 00:20:09.024 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:09.024 "strip_size_kb": 64, 00:20:09.024 "state": "configuring", 00:20:09.024 "raid_level": "raid0", 00:20:09.024 "superblock": true, 
00:20:09.024 "num_base_bdevs": 3, 00:20:09.024 "num_base_bdevs_discovered": 2, 00:20:09.024 "num_base_bdevs_operational": 3, 00:20:09.024 "base_bdevs_list": [ 00:20:09.024 { 00:20:09.024 "name": "BaseBdev1", 00:20:09.024 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:09.024 "is_configured": true, 00:20:09.024 "data_offset": 2048, 00:20:09.024 "data_size": 63488 00:20:09.024 }, 00:20:09.024 { 00:20:09.024 "name": null, 00:20:09.024 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:09.024 "is_configured": false, 00:20:09.024 "data_offset": 0, 00:20:09.024 "data_size": 63488 00:20:09.024 }, 00:20:09.024 { 00:20:09.024 "name": "BaseBdev3", 00:20:09.024 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:09.024 "is_configured": true, 00:20:09.024 "data_offset": 2048, 00:20:09.024 "data_size": 63488 00:20:09.024 } 00:20:09.024 ] 00:20:09.024 }' 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.024 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.613 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.613 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.613 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.613 07:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:09.613 07:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- 
# xtrace_disable 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.613 [2024-10-07 07:41:09.024782] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.613 "name": "Existed_Raid", 00:20:09.613 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:09.613 "strip_size_kb": 64, 00:20:09.613 "state": "configuring", 00:20:09.613 "raid_level": "raid0", 00:20:09.613 "superblock": true, 00:20:09.613 "num_base_bdevs": 3, 00:20:09.613 "num_base_bdevs_discovered": 1, 00:20:09.613 "num_base_bdevs_operational": 3, 00:20:09.613 "base_bdevs_list": [ 00:20:09.613 { 00:20:09.613 "name": "BaseBdev1", 00:20:09.613 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:09.613 "is_configured": true, 00:20:09.613 "data_offset": 2048, 00:20:09.613 "data_size": 63488 00:20:09.613 }, 00:20:09.613 { 00:20:09.613 "name": null, 00:20:09.613 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:09.613 "is_configured": false, 00:20:09.613 "data_offset": 0, 00:20:09.613 "data_size": 63488 00:20:09.613 }, 00:20:09.613 { 00:20:09.613 "name": null, 00:20:09.613 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:09.613 "is_configured": false, 00:20:09.613 "data_offset": 0, 00:20:09.613 "data_size": 63488 00:20:09.613 } 00:20:09.613 ] 00:20:09.613 }' 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.613 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.180 [2024-10-07 07:41:09.540949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.180 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.181 "name": "Existed_Raid", 00:20:10.181 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:10.181 "strip_size_kb": 64, 00:20:10.181 "state": "configuring", 00:20:10.181 "raid_level": "raid0", 00:20:10.181 "superblock": true, 00:20:10.181 "num_base_bdevs": 3, 00:20:10.181 "num_base_bdevs_discovered": 2, 00:20:10.181 "num_base_bdevs_operational": 3, 00:20:10.181 "base_bdevs_list": [ 00:20:10.181 { 00:20:10.181 "name": "BaseBdev1", 00:20:10.181 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:10.181 "is_configured": true, 00:20:10.181 "data_offset": 2048, 00:20:10.181 "data_size": 63488 00:20:10.181 }, 00:20:10.181 { 00:20:10.181 "name": null, 00:20:10.181 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:10.181 "is_configured": false, 00:20:10.181 "data_offset": 0, 00:20:10.181 "data_size": 63488 00:20:10.181 }, 00:20:10.181 { 00:20:10.181 "name": "BaseBdev3", 00:20:10.181 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:10.181 "is_configured": true, 00:20:10.181 "data_offset": 2048, 00:20:10.181 "data_size": 63488 00:20:10.181 } 00:20:10.181 ] 00:20:10.181 }' 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.181 07:41:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.439 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:10.439 07:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.439 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:10.439 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.439 07:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.697 [2024-10-07 07:41:10.017183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.697 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.698 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:10.698 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.698 "name": "Existed_Raid", 00:20:10.698 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:10.698 "strip_size_kb": 64, 00:20:10.698 "state": "configuring", 00:20:10.698 "raid_level": "raid0", 00:20:10.698 "superblock": true, 00:20:10.698 "num_base_bdevs": 3, 00:20:10.698 "num_base_bdevs_discovered": 1, 00:20:10.698 "num_base_bdevs_operational": 3, 00:20:10.698 "base_bdevs_list": [ 00:20:10.698 { 00:20:10.698 "name": null, 00:20:10.698 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:10.698 "is_configured": false, 00:20:10.698 "data_offset": 0, 00:20:10.698 "data_size": 63488 00:20:10.698 }, 00:20:10.698 { 00:20:10.698 "name": null, 00:20:10.698 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:10.698 "is_configured": false, 00:20:10.698 "data_offset": 0, 00:20:10.698 
"data_size": 63488 00:20:10.698 }, 00:20:10.698 { 00:20:10.698 "name": "BaseBdev3", 00:20:10.698 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:10.698 "is_configured": true, 00:20:10.698 "data_offset": 2048, 00:20:10.698 "data_size": 63488 00:20:10.698 } 00:20:10.698 ] 00:20:10.698 }' 00:20:10.698 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.698 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.264 [2024-10-07 07:41:10.621500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:11.264 07:41:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.264 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.265 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.265 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.265 "name": "Existed_Raid", 00:20:11.265 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:11.265 "strip_size_kb": 64, 00:20:11.265 "state": "configuring", 00:20:11.265 "raid_level": "raid0", 00:20:11.265 "superblock": true, 00:20:11.265 "num_base_bdevs": 3, 00:20:11.265 
"num_base_bdevs_discovered": 2, 00:20:11.265 "num_base_bdevs_operational": 3, 00:20:11.265 "base_bdevs_list": [ 00:20:11.265 { 00:20:11.265 "name": null, 00:20:11.265 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:11.265 "is_configured": false, 00:20:11.265 "data_offset": 0, 00:20:11.265 "data_size": 63488 00:20:11.265 }, 00:20:11.265 { 00:20:11.265 "name": "BaseBdev2", 00:20:11.265 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:11.265 "is_configured": true, 00:20:11.265 "data_offset": 2048, 00:20:11.265 "data_size": 63488 00:20:11.265 }, 00:20:11.265 { 00:20:11.265 "name": "BaseBdev3", 00:20:11.265 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:11.265 "is_configured": true, 00:20:11.265 "data_offset": 2048, 00:20:11.265 "data_size": 63488 00:20:11.265 } 00:20:11.265 ] 00:20:11.265 }' 00:20:11.265 07:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.265 07:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.523 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:11.523 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.523 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.523 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.523 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.782 07:41:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be0ad613-a94f-44a8-ae27-5272c5149495 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.782 [2024-10-07 07:41:11.201035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:11.782 [2024-10-07 07:41:11.201313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:11.782 [2024-10-07 07:41:11.201337] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:11.782 [2024-10-07 07:41:11.201644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:11.782 NewBaseBdev 00:20:11.782 [2024-10-07 07:41:11.201835] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:11.782 [2024-10-07 07:41:11.201846] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:11.782 [2024-10-07 07:41:11.201985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 
00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:11.782 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.783 [ 00:20:11.783 { 00:20:11.783 "name": "NewBaseBdev", 00:20:11.783 "aliases": [ 00:20:11.783 "be0ad613-a94f-44a8-ae27-5272c5149495" 00:20:11.783 ], 00:20:11.783 "product_name": "Malloc disk", 00:20:11.783 "block_size": 512, 00:20:11.783 "num_blocks": 65536, 00:20:11.783 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:11.783 "assigned_rate_limits": { 00:20:11.783 "rw_ios_per_sec": 0, 00:20:11.783 "rw_mbytes_per_sec": 0, 00:20:11.783 "r_mbytes_per_sec": 0, 00:20:11.783 "w_mbytes_per_sec": 0 00:20:11.783 }, 00:20:11.783 "claimed": true, 00:20:11.783 "claim_type": "exclusive_write", 00:20:11.783 "zoned": false, 00:20:11.783 "supported_io_types": { 00:20:11.783 "read": true, 00:20:11.783 "write": true, 
00:20:11.783 "unmap": true, 00:20:11.783 "flush": true, 00:20:11.783 "reset": true, 00:20:11.783 "nvme_admin": false, 00:20:11.783 "nvme_io": false, 00:20:11.783 "nvme_io_md": false, 00:20:11.783 "write_zeroes": true, 00:20:11.783 "zcopy": true, 00:20:11.783 "get_zone_info": false, 00:20:11.783 "zone_management": false, 00:20:11.783 "zone_append": false, 00:20:11.783 "compare": false, 00:20:11.783 "compare_and_write": false, 00:20:11.783 "abort": true, 00:20:11.783 "seek_hole": false, 00:20:11.783 "seek_data": false, 00:20:11.783 "copy": true, 00:20:11.783 "nvme_iov_md": false 00:20:11.783 }, 00:20:11.783 "memory_domains": [ 00:20:11.783 { 00:20:11.783 "dma_device_id": "system", 00:20:11.783 "dma_device_type": 1 00:20:11.783 }, 00:20:11.783 { 00:20:11.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.783 "dma_device_type": 2 00:20:11.783 } 00:20:11.783 ], 00:20:11.783 "driver_specific": {} 00:20:11.783 } 00:20:11.783 ] 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.783 "name": "Existed_Raid", 00:20:11.783 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:11.783 "strip_size_kb": 64, 00:20:11.783 "state": "online", 00:20:11.783 "raid_level": "raid0", 00:20:11.783 "superblock": true, 00:20:11.783 "num_base_bdevs": 3, 00:20:11.783 "num_base_bdevs_discovered": 3, 00:20:11.783 "num_base_bdevs_operational": 3, 00:20:11.783 "base_bdevs_list": [ 00:20:11.783 { 00:20:11.783 "name": "NewBaseBdev", 00:20:11.783 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:11.783 "is_configured": true, 00:20:11.783 "data_offset": 2048, 00:20:11.783 "data_size": 63488 00:20:11.783 }, 00:20:11.783 { 00:20:11.783 "name": "BaseBdev2", 00:20:11.783 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:11.783 "is_configured": true, 00:20:11.783 "data_offset": 2048, 00:20:11.783 "data_size": 63488 00:20:11.783 }, 00:20:11.783 { 00:20:11.783 "name": "BaseBdev3", 00:20:11.783 "uuid": 
"58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:11.783 "is_configured": true, 00:20:11.783 "data_offset": 2048, 00:20:11.783 "data_size": 63488 00:20:11.783 } 00:20:11.783 ] 00:20:11.783 }' 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.783 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.349 [2024-10-07 07:41:11.709570] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.349 "name": "Existed_Raid", 00:20:12.349 "aliases": [ 00:20:12.349 "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a" 
00:20:12.349 ], 00:20:12.349 "product_name": "Raid Volume", 00:20:12.349 "block_size": 512, 00:20:12.349 "num_blocks": 190464, 00:20:12.349 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:12.349 "assigned_rate_limits": { 00:20:12.349 "rw_ios_per_sec": 0, 00:20:12.349 "rw_mbytes_per_sec": 0, 00:20:12.349 "r_mbytes_per_sec": 0, 00:20:12.349 "w_mbytes_per_sec": 0 00:20:12.349 }, 00:20:12.349 "claimed": false, 00:20:12.349 "zoned": false, 00:20:12.349 "supported_io_types": { 00:20:12.349 "read": true, 00:20:12.349 "write": true, 00:20:12.349 "unmap": true, 00:20:12.349 "flush": true, 00:20:12.349 "reset": true, 00:20:12.349 "nvme_admin": false, 00:20:12.349 "nvme_io": false, 00:20:12.349 "nvme_io_md": false, 00:20:12.349 "write_zeroes": true, 00:20:12.349 "zcopy": false, 00:20:12.349 "get_zone_info": false, 00:20:12.349 "zone_management": false, 00:20:12.349 "zone_append": false, 00:20:12.349 "compare": false, 00:20:12.349 "compare_and_write": false, 00:20:12.349 "abort": false, 00:20:12.349 "seek_hole": false, 00:20:12.349 "seek_data": false, 00:20:12.349 "copy": false, 00:20:12.349 "nvme_iov_md": false 00:20:12.349 }, 00:20:12.349 "memory_domains": [ 00:20:12.349 { 00:20:12.349 "dma_device_id": "system", 00:20:12.349 "dma_device_type": 1 00:20:12.349 }, 00:20:12.349 { 00:20:12.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.349 "dma_device_type": 2 00:20:12.349 }, 00:20:12.349 { 00:20:12.349 "dma_device_id": "system", 00:20:12.349 "dma_device_type": 1 00:20:12.349 }, 00:20:12.349 { 00:20:12.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.349 "dma_device_type": 2 00:20:12.349 }, 00:20:12.349 { 00:20:12.349 "dma_device_id": "system", 00:20:12.349 "dma_device_type": 1 00:20:12.349 }, 00:20:12.349 { 00:20:12.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.349 "dma_device_type": 2 00:20:12.349 } 00:20:12.349 ], 00:20:12.349 "driver_specific": { 00:20:12.349 "raid": { 00:20:12.349 "uuid": "9ecff20e-19ae-49aa-9eb9-cc131ecbde9a", 00:20:12.349 
"strip_size_kb": 64, 00:20:12.349 "state": "online", 00:20:12.349 "raid_level": "raid0", 00:20:12.349 "superblock": true, 00:20:12.349 "num_base_bdevs": 3, 00:20:12.349 "num_base_bdevs_discovered": 3, 00:20:12.349 "num_base_bdevs_operational": 3, 00:20:12.349 "base_bdevs_list": [ 00:20:12.349 { 00:20:12.349 "name": "NewBaseBdev", 00:20:12.349 "uuid": "be0ad613-a94f-44a8-ae27-5272c5149495", 00:20:12.349 "is_configured": true, 00:20:12.349 "data_offset": 2048, 00:20:12.349 "data_size": 63488 00:20:12.349 }, 00:20:12.349 { 00:20:12.349 "name": "BaseBdev2", 00:20:12.349 "uuid": "20b2002d-a769-4f39-a1e5-ecd211ebdf9d", 00:20:12.349 "is_configured": true, 00:20:12.349 "data_offset": 2048, 00:20:12.349 "data_size": 63488 00:20:12.349 }, 00:20:12.349 { 00:20:12.349 "name": "BaseBdev3", 00:20:12.349 "uuid": "58d82f42-53f7-49fc-b8f1-e7487c465f1a", 00:20:12.349 "is_configured": true, 00:20:12.349 "data_offset": 2048, 00:20:12.349 "data_size": 63488 00:20:12.349 } 00:20:12.349 ] 00:20:12.349 } 00:20:12.349 } 00:20:12.349 }' 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:12.349 BaseBdev2 00:20:12.349 BaseBdev3' 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.349 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:12.607 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.607 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.607 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:12.608 07:41:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.608 07:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.608 [2024-10-07 07:41:12.013294] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:12.608 [2024-10-07 07:41:12.013333] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.608 [2024-10-07 07:41:12.013462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.608 [2024-10-07 07:41:12.013559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.608 [2024-10-07 07:41:12.013586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64502 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 64502 ']' 00:20:12.608 07:41:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 64502 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 64502 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:12.608 killing process with pid 64502 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 64502' 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 64502 00:20:12.608 [2024-10-07 07:41:12.054207] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.608 07:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 64502 00:20:12.866 [2024-10-07 07:41:12.389299] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:14.242 07:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:14.242 00:20:14.242 real 0m11.300s 00:20:14.242 user 0m17.894s 00:20:14.242 sys 0m2.020s 00:20:14.242 07:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:14.242 ************************************ 00:20:14.242 END TEST raid_state_function_test_sb 00:20:14.242 ************************************ 00:20:14.242 07:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 07:41:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:20:14.500 07:41:13 
bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:20:14.500 07:41:13 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:14.500 07:41:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 ************************************ 00:20:14.500 START TEST raid_superblock_test 00:20:14.500 ************************************ 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid0 3 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:20:14.500 07:41:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65128 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65128 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 65128 ']' 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:14.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:14.500 07:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 [2024-10-07 07:41:13.932491] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:20:14.500 [2024-10-07 07:41:13.932630] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65128 ] 00:20:14.758 [2024-10-07 07:41:14.101621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.016 [2024-10-07 07:41:14.331333] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.016 [2024-10-07 07:41:14.553763] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.016 [2024-10-07 07:41:14.553838] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:15.581 
07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.581 malloc1 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.581 [2024-10-07 07:41:14.974436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:15.581 [2024-10-07 07:41:14.974646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.581 [2024-10-07 07:41:14.974797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:15.581 [2024-10-07 07:41:14.974908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.581 [2024-10-07 07:41:14.977553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.581 [2024-10-07 07:41:14.977723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:15.581 pt1 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.581 07:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.581 malloc2 00:20:15.581 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.581 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:15.581 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.581 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.581 [2024-10-07 07:41:15.049580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:15.581 [2024-10-07 07:41:15.049859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.581 [2024-10-07 07:41:15.049997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:15.581 [2024-10-07 07:41:15.050089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.582 [2024-10-07 07:41:15.053027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.582 [2024-10-07 07:41:15.053210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:15.582 
pt2 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 malloc3 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 [2024-10-07 07:41:15.111376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:15.582 [2024-10-07 07:41:15.111443] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.582 [2024-10-07 07:41:15.111473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:15.582 [2024-10-07 07:41:15.111487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.582 [2024-10-07 07:41:15.114217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.582 [2024-10-07 07:41:15.114271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:15.582 pt3 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.582 [2024-10-07 07:41:15.123466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:15.582 [2024-10-07 07:41:15.125939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:15.582 [2024-10-07 07:41:15.126011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:15.582 [2024-10-07 07:41:15.126188] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:15.582 [2024-10-07 07:41:15.126205] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:15.582 [2024-10-07 07:41:15.126510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:15.582 [2024-10-07 07:41:15.126688] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:15.582 [2024-10-07 07:41:15.126699] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:15.582 [2024-10-07 07:41:15.127172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.582 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:15.582 07:41:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.839 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:15.839 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.839 "name": "raid_bdev1", 00:20:15.839 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:15.839 "strip_size_kb": 64, 00:20:15.839 "state": "online", 00:20:15.839 "raid_level": "raid0", 00:20:15.839 "superblock": true, 00:20:15.839 "num_base_bdevs": 3, 00:20:15.839 "num_base_bdevs_discovered": 3, 00:20:15.839 "num_base_bdevs_operational": 3, 00:20:15.839 "base_bdevs_list": [ 00:20:15.839 { 00:20:15.839 "name": "pt1", 00:20:15.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:15.839 "is_configured": true, 00:20:15.839 "data_offset": 2048, 00:20:15.839 "data_size": 63488 00:20:15.839 }, 00:20:15.839 { 00:20:15.839 "name": "pt2", 00:20:15.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.839 "is_configured": true, 00:20:15.839 "data_offset": 2048, 00:20:15.839 "data_size": 63488 00:20:15.839 }, 00:20:15.839 { 00:20:15.839 "name": "pt3", 00:20:15.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:15.839 "is_configured": true, 00:20:15.839 "data_offset": 2048, 00:20:15.839 "data_size": 63488 00:20:15.839 } 00:20:15.839 ] 00:20:15.839 }' 00:20:15.839 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.839 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.098 [2024-10-07 07:41:15.583881] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.098 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:16.098 "name": "raid_bdev1", 00:20:16.098 "aliases": [ 00:20:16.098 "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b" 00:20:16.098 ], 00:20:16.098 "product_name": "Raid Volume", 00:20:16.098 "block_size": 512, 00:20:16.098 "num_blocks": 190464, 00:20:16.098 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:16.098 "assigned_rate_limits": { 00:20:16.098 "rw_ios_per_sec": 0, 00:20:16.098 "rw_mbytes_per_sec": 0, 00:20:16.098 "r_mbytes_per_sec": 0, 00:20:16.098 "w_mbytes_per_sec": 0 00:20:16.098 }, 00:20:16.098 "claimed": false, 00:20:16.098 "zoned": false, 00:20:16.098 "supported_io_types": { 00:20:16.098 "read": true, 00:20:16.098 "write": true, 00:20:16.098 "unmap": true, 00:20:16.098 "flush": true, 00:20:16.098 "reset": true, 00:20:16.098 "nvme_admin": false, 00:20:16.098 "nvme_io": false, 00:20:16.098 "nvme_io_md": false, 00:20:16.098 "write_zeroes": true, 00:20:16.098 "zcopy": false, 00:20:16.098 "get_zone_info": false, 00:20:16.099 "zone_management": false, 00:20:16.099 "zone_append": false, 00:20:16.099 "compare": 
false, 00:20:16.099 "compare_and_write": false, 00:20:16.099 "abort": false, 00:20:16.099 "seek_hole": false, 00:20:16.099 "seek_data": false, 00:20:16.099 "copy": false, 00:20:16.099 "nvme_iov_md": false 00:20:16.099 }, 00:20:16.099 "memory_domains": [ 00:20:16.099 { 00:20:16.099 "dma_device_id": "system", 00:20:16.099 "dma_device_type": 1 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.099 "dma_device_type": 2 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "dma_device_id": "system", 00:20:16.099 "dma_device_type": 1 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.099 "dma_device_type": 2 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "dma_device_id": "system", 00:20:16.099 "dma_device_type": 1 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.099 "dma_device_type": 2 00:20:16.099 } 00:20:16.099 ], 00:20:16.099 "driver_specific": { 00:20:16.099 "raid": { 00:20:16.099 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:16.099 "strip_size_kb": 64, 00:20:16.099 "state": "online", 00:20:16.099 "raid_level": "raid0", 00:20:16.099 "superblock": true, 00:20:16.099 "num_base_bdevs": 3, 00:20:16.099 "num_base_bdevs_discovered": 3, 00:20:16.099 "num_base_bdevs_operational": 3, 00:20:16.099 "base_bdevs_list": [ 00:20:16.099 { 00:20:16.099 "name": "pt1", 00:20:16.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:16.099 "is_configured": true, 00:20:16.099 "data_offset": 2048, 00:20:16.099 "data_size": 63488 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "name": "pt2", 00:20:16.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.099 "is_configured": true, 00:20:16.099 "data_offset": 2048, 00:20:16.099 "data_size": 63488 00:20:16.099 }, 00:20:16.099 { 00:20:16.099 "name": "pt3", 00:20:16.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:16.099 "is_configured": true, 00:20:16.099 "data_offset": 2048, 00:20:16.099 "data_size": 
63488 00:20:16.099 } 00:20:16.099 ] 00:20:16.099 } 00:20:16.099 } 00:20:16.099 }' 00:20:16.099 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:16.358 pt2 00:20:16.358 pt3' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:16.358 [2024-10-07 07:41:15.851908] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 
-- # [[ 0 == 0 ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92ccd978-47bc-4cd5-9e68-7e483dc0fa0b 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92ccd978-47bc-4cd5-9e68-7e483dc0fa0b ']' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.358 [2024-10-07 07:41:15.899599] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.358 [2024-10-07 07:41:15.899637] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:16.358 [2024-10-07 07:41:15.899741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:16.358 [2024-10-07 07:41:15.899813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:16.358 [2024-10-07 07:41:15.899827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:16.358 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.359 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.359 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.359 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:16.618 07:41:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.618 07:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.618 [2024-10-07 07:41:16.031656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:16.618 [2024-10-07 07:41:16.034105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:16.618 [2024-10-07 07:41:16.034166] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:16.618 [2024-10-07 07:41:16.034227] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:16.618 [2024-10-07 07:41:16.034289] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:16.618 [2024-10-07 07:41:16.034314] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:16.618 [2024-10-07 07:41:16.034339] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.618 [2024-10-07 07:41:16.034351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:16.618 request: 00:20:16.618 { 00:20:16.618 "name": "raid_bdev1", 00:20:16.618 "raid_level": "raid0", 00:20:16.618 "base_bdevs": [ 00:20:16.618 "malloc1", 00:20:16.618 "malloc2", 00:20:16.618 "malloc3" 00:20:16.618 ], 00:20:16.618 "strip_size_kb": 64, 00:20:16.618 "superblock": false, 00:20:16.618 "method": "bdev_raid_create", 00:20:16.618 "req_id": 1 00:20:16.618 } 00:20:16.618 Got JSON-RPC error response 00:20:16.618 response: 00:20:16.618 { 00:20:16.618 "code": -17, 00:20:16.618 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:16.618 } 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:16.618 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.619 [2024-10-07 07:41:16.083627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:16.619 [2024-10-07 07:41:16.084128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.619 [2024-10-07 07:41:16.084254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:16.619 [2024-10-07 07:41:16.084340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.619 [2024-10-07 07:41:16.087145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.619 [2024-10-07 07:41:16.087296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:16.619 [2024-10-07 07:41:16.087485] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:16.619 [2024-10-07 07:41:16.087631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:20:16.619 pt1 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.619 "name": "raid_bdev1", 00:20:16.619 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:16.619 
"strip_size_kb": 64, 00:20:16.619 "state": "configuring", 00:20:16.619 "raid_level": "raid0", 00:20:16.619 "superblock": true, 00:20:16.619 "num_base_bdevs": 3, 00:20:16.619 "num_base_bdevs_discovered": 1, 00:20:16.619 "num_base_bdevs_operational": 3, 00:20:16.619 "base_bdevs_list": [ 00:20:16.619 { 00:20:16.619 "name": "pt1", 00:20:16.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:16.619 "is_configured": true, 00:20:16.619 "data_offset": 2048, 00:20:16.619 "data_size": 63488 00:20:16.619 }, 00:20:16.619 { 00:20:16.619 "name": null, 00:20:16.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.619 "is_configured": false, 00:20:16.619 "data_offset": 2048, 00:20:16.619 "data_size": 63488 00:20:16.619 }, 00:20:16.619 { 00:20:16.619 "name": null, 00:20:16.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:16.619 "is_configured": false, 00:20:16.619 "data_offset": 2048, 00:20:16.619 "data_size": 63488 00:20:16.619 } 00:20:16.619 ] 00:20:16.619 }' 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.619 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.186 [2024-10-07 07:41:16.520060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.186 [2024-10-07 07:41:16.520144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.186 [2024-10-07 07:41:16.520175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:20:17.186 [2024-10-07 07:41:16.520190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.186 [2024-10-07 07:41:16.520669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.186 [2024-10-07 07:41:16.520695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.186 [2024-10-07 07:41:16.520817] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:17.186 [2024-10-07 07:41:16.520843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.186 pt2 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.186 [2024-10-07 07:41:16.528071] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.186 07:41:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:17.186 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.186 "name": "raid_bdev1", 00:20:17.186 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:17.186 "strip_size_kb": 64, 00:20:17.186 "state": "configuring", 00:20:17.186 "raid_level": "raid0", 00:20:17.187 "superblock": true, 00:20:17.187 "num_base_bdevs": 3, 00:20:17.187 "num_base_bdevs_discovered": 1, 00:20:17.187 "num_base_bdevs_operational": 3, 00:20:17.187 "base_bdevs_list": [ 00:20:17.187 { 00:20:17.187 "name": "pt1", 00:20:17.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:17.187 "is_configured": true, 00:20:17.187 "data_offset": 2048, 00:20:17.187 "data_size": 63488 00:20:17.187 }, 00:20:17.187 { 00:20:17.187 "name": null, 00:20:17.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.187 "is_configured": false, 00:20:17.187 "data_offset": 0, 00:20:17.187 "data_size": 63488 00:20:17.187 }, 00:20:17.187 { 00:20:17.187 "name": null, 00:20:17.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:17.187 
"is_configured": false, 00:20:17.187 "data_offset": 2048, 00:20:17.187 "data_size": 63488 00:20:17.187 } 00:20:17.187 ] 00:20:17.187 }' 00:20:17.187 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.187 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.445 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:17.445 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:17.445 07:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.445 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.445 07:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.445 [2024-10-07 07:41:17.000147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.446 [2024-10-07 07:41:17.000234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.446 [2024-10-07 07:41:17.000257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:17.446 [2024-10-07 07:41:17.000272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.446 [2024-10-07 07:41:17.000795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.446 [2024-10-07 07:41:17.000830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.446 [2024-10-07 07:41:17.000924] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:17.446 [2024-10-07 07:41:17.000963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.704 pt2 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.704 [2024-10-07 07:41:17.012179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:17.704 [2024-10-07 07:41:17.012366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.704 [2024-10-07 07:41:17.012394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:17.704 [2024-10-07 07:41:17.012410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.704 [2024-10-07 07:41:17.012920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.704 [2024-10-07 07:41:17.012958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:17.704 [2024-10-07 07:41:17.013047] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:17.704 [2024-10-07 07:41:17.013075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:17.704 [2024-10-07 07:41:17.013207] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:17.704 [2024-10-07 07:41:17.013222] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:17.704 [2024-10-07 07:41:17.013528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:17.704 [2024-10-07 07:41:17.013692] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:17.704 [2024-10-07 07:41:17.013703] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:17.704 [2024-10-07 07:41:17.013891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.704 pt3 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.704 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:17.705 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.705 "name": "raid_bdev1", 00:20:17.705 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:17.705 "strip_size_kb": 64, 00:20:17.705 "state": "online", 00:20:17.705 "raid_level": "raid0", 00:20:17.705 "superblock": true, 00:20:17.705 "num_base_bdevs": 3, 00:20:17.705 "num_base_bdevs_discovered": 3, 00:20:17.705 "num_base_bdevs_operational": 3, 00:20:17.705 "base_bdevs_list": [ 00:20:17.705 { 00:20:17.705 "name": "pt1", 00:20:17.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:17.705 "is_configured": true, 00:20:17.705 "data_offset": 2048, 00:20:17.705 "data_size": 63488 00:20:17.705 }, 00:20:17.705 { 00:20:17.705 "name": "pt2", 00:20:17.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.705 "is_configured": true, 00:20:17.705 "data_offset": 2048, 00:20:17.705 "data_size": 63488 00:20:17.705 }, 00:20:17.705 { 00:20:17.705 "name": "pt3", 00:20:17.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:17.705 "is_configured": true, 00:20:17.705 "data_offset": 2048, 00:20:17.705 "data_size": 63488 00:20:17.705 } 00:20:17.705 ] 00:20:17.705 }' 00:20:17.705 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.705 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:17.964 07:41:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.964 [2024-10-07 07:41:17.468587] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:17.964 "name": "raid_bdev1", 00:20:17.964 "aliases": [ 00:20:17.964 "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b" 00:20:17.964 ], 00:20:17.964 "product_name": "Raid Volume", 00:20:17.964 "block_size": 512, 00:20:17.964 "num_blocks": 190464, 00:20:17.964 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:17.964 "assigned_rate_limits": { 00:20:17.964 "rw_ios_per_sec": 0, 00:20:17.964 "rw_mbytes_per_sec": 0, 00:20:17.964 "r_mbytes_per_sec": 0, 00:20:17.964 "w_mbytes_per_sec": 0 00:20:17.964 }, 00:20:17.964 "claimed": false, 00:20:17.964 "zoned": false, 00:20:17.964 "supported_io_types": { 00:20:17.964 "read": true, 00:20:17.964 "write": true, 00:20:17.964 "unmap": true, 00:20:17.964 "flush": true, 00:20:17.964 "reset": true, 00:20:17.964 "nvme_admin": false, 00:20:17.964 "nvme_io": false, 00:20:17.964 "nvme_io_md": false, 00:20:17.964 
"write_zeroes": true, 00:20:17.964 "zcopy": false, 00:20:17.964 "get_zone_info": false, 00:20:17.964 "zone_management": false, 00:20:17.964 "zone_append": false, 00:20:17.964 "compare": false, 00:20:17.964 "compare_and_write": false, 00:20:17.964 "abort": false, 00:20:17.964 "seek_hole": false, 00:20:17.964 "seek_data": false, 00:20:17.964 "copy": false, 00:20:17.964 "nvme_iov_md": false 00:20:17.964 }, 00:20:17.964 "memory_domains": [ 00:20:17.964 { 00:20:17.964 "dma_device_id": "system", 00:20:17.964 "dma_device_type": 1 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.964 "dma_device_type": 2 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "dma_device_id": "system", 00:20:17.964 "dma_device_type": 1 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.964 "dma_device_type": 2 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "dma_device_id": "system", 00:20:17.964 "dma_device_type": 1 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.964 "dma_device_type": 2 00:20:17.964 } 00:20:17.964 ], 00:20:17.964 "driver_specific": { 00:20:17.964 "raid": { 00:20:17.964 "uuid": "92ccd978-47bc-4cd5-9e68-7e483dc0fa0b", 00:20:17.964 "strip_size_kb": 64, 00:20:17.964 "state": "online", 00:20:17.964 "raid_level": "raid0", 00:20:17.964 "superblock": true, 00:20:17.964 "num_base_bdevs": 3, 00:20:17.964 "num_base_bdevs_discovered": 3, 00:20:17.964 "num_base_bdevs_operational": 3, 00:20:17.964 "base_bdevs_list": [ 00:20:17.964 { 00:20:17.964 "name": "pt1", 00:20:17.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:17.964 "is_configured": true, 00:20:17.964 "data_offset": 2048, 00:20:17.964 "data_size": 63488 00:20:17.964 }, 00:20:17.964 { 00:20:17.964 "name": "pt2", 00:20:17.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.964 "is_configured": true, 00:20:17.964 "data_offset": 2048, 00:20:17.964 "data_size": 63488 00:20:17.964 }, 00:20:17.964 
{ 00:20:17.964 "name": "pt3", 00:20:17.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:17.964 "is_configured": true, 00:20:17.964 "data_offset": 2048, 00:20:17.964 "data_size": 63488 00:20:17.964 } 00:20:17.964 ] 00:20:17.964 } 00:20:17.964 } 00:20:17.964 }' 00:20:17.964 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:18.223 pt2 00:20:18.223 pt3' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:18.223 07:41:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.223 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:18.223 
[2024-10-07 07:41:17.768653] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 92ccd978-47bc-4cd5-9e68-7e483dc0fa0b '!=' 92ccd978-47bc-4cd5-9e68-7e483dc0fa0b ']' 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65128 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 65128 ']' 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 65128 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 65128 00:20:18.482 killing process with pid 65128 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 65128' 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 65128 00:20:18.482 [2024-10-07 07:41:17.849409] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:18.482 07:41:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@977 -- # wait 65128 00:20:18.482 [2024-10-07 07:41:17.849518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.482 [2024-10-07 07:41:17.849586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.482 [2024-10-07 07:41:17.849605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:18.740 [2024-10-07 07:41:18.185081] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.136 07:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:20.136 00:20:20.136 real 0m5.787s 00:20:20.136 user 0m8.257s 00:20:20.136 sys 0m0.961s 00:20:20.136 07:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:20.136 07:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.136 ************************************ 00:20:20.136 END TEST raid_superblock_test 00:20:20.136 ************************************ 00:20:20.136 07:41:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:20:20.136 07:41:19 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:20:20.136 07:41:19 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:20.136 07:41:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.136 ************************************ 00:20:20.136 START TEST raid_read_error_test 00:20:20.136 ************************************ 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid0 3 read 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:20.136 07:41:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oa1QIUsqJR 00:20:20.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65386 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65386 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 65386 ']' 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:20.136 07:41:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.393 [2024-10-07 07:41:19.779070] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:20:20.393 [2024-10-07 07:41:19.779436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65386 ] 00:20:20.393 [2024-10-07 07:41:19.948516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.650 [2024-10-07 07:41:20.176576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.907 [2024-10-07 07:41:20.405795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.907 [2024-10-07 07:41:20.406035] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 BaseBdev1_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 true 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 [2024-10-07 07:41:20.797494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:21.471 [2024-10-07 07:41:20.797698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.471 [2024-10-07 07:41:20.797745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:21.471 [2024-10-07 07:41:20.797763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.471 [2024-10-07 07:41:20.800368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.471 [2024-10-07 07:41:20.800414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:21.471 BaseBdev1 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 BaseBdev2_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 true 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 [2024-10-07 07:41:20.873947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:21.471 [2024-10-07 07:41:20.874011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.471 [2024-10-07 07:41:20.874033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:21.471 [2024-10-07 07:41:20.874048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.471 [2024-10-07 07:41:20.876553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.471 [2024-10-07 07:41:20.876598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:21.471 BaseBdev2 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 BaseBdev3_malloc 00:20:21.471 07:41:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.471 true 00:20:21.471 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.472 [2024-10-07 07:41:20.937462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:21.472 [2024-10-07 07:41:20.937519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.472 [2024-10-07 07:41:20.937540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:21.472 [2024-10-07 07:41:20.937554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.472 [2024-10-07 07:41:20.940061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.472 [2024-10-07 07:41:20.940226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:21.472 BaseBdev3 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.472 [2024-10-07 07:41:20.945545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.472 [2024-10-07 07:41:20.947706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.472 [2024-10-07 07:41:20.947798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.472 [2024-10-07 07:41:20.947990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:21.472 [2024-10-07 07:41:20.948003] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:21.472 [2024-10-07 07:41:20.948267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:21.472 [2024-10-07 07:41:20.948428] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:21.472 [2024-10-07 07:41:20.948442] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:21.472 [2024-10-07 07:41:20.948612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.472 07:41:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.472 "name": "raid_bdev1", 00:20:21.472 "uuid": "4e1f4abe-6f85-494e-94f0-0d9360b8f494", 00:20:21.472 "strip_size_kb": 64, 00:20:21.472 "state": "online", 00:20:21.472 "raid_level": "raid0", 00:20:21.472 "superblock": true, 00:20:21.472 "num_base_bdevs": 3, 00:20:21.472 "num_base_bdevs_discovered": 3, 00:20:21.472 "num_base_bdevs_operational": 3, 00:20:21.472 "base_bdevs_list": [ 00:20:21.472 { 00:20:21.472 "name": "BaseBdev1", 00:20:21.472 "uuid": "bf9099e8-b54f-5bd8-892c-afd4e4d06f39", 00:20:21.472 "is_configured": true, 00:20:21.472 "data_offset": 2048, 00:20:21.472 "data_size": 63488 00:20:21.472 }, 00:20:21.472 { 00:20:21.472 "name": "BaseBdev2", 00:20:21.472 "uuid": "11c16722-8f93-5a33-8370-dd39de421570", 00:20:21.472 "is_configured": true, 00:20:21.472 "data_offset": 2048, 00:20:21.472 "data_size": 63488 
00:20:21.472 }, 00:20:21.472 { 00:20:21.472 "name": "BaseBdev3", 00:20:21.472 "uuid": "4a29f638-efcc-5ea3-86bb-54a9c4c3dd03", 00:20:21.472 "is_configured": true, 00:20:21.472 "data_offset": 2048, 00:20:21.472 "data_size": 63488 00:20:21.472 } 00:20:21.472 ] 00:20:21.472 }' 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.472 07:41:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.036 07:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:22.036 07:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:22.036 [2024-10-07 07:41:21.511256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.969 "name": "raid_bdev1", 00:20:22.969 "uuid": "4e1f4abe-6f85-494e-94f0-0d9360b8f494", 00:20:22.969 "strip_size_kb": 64, 00:20:22.969 "state": "online", 00:20:22.969 "raid_level": "raid0", 00:20:22.969 "superblock": true, 00:20:22.969 "num_base_bdevs": 3, 00:20:22.969 "num_base_bdevs_discovered": 3, 00:20:22.969 "num_base_bdevs_operational": 3, 00:20:22.969 "base_bdevs_list": [ 00:20:22.969 { 00:20:22.969 "name": "BaseBdev1", 00:20:22.969 "uuid": "bf9099e8-b54f-5bd8-892c-afd4e4d06f39", 00:20:22.969 "is_configured": true, 00:20:22.969 "data_offset": 2048, 00:20:22.969 "data_size": 63488 
00:20:22.969 }, 00:20:22.969 { 00:20:22.969 "name": "BaseBdev2", 00:20:22.969 "uuid": "11c16722-8f93-5a33-8370-dd39de421570", 00:20:22.969 "is_configured": true, 00:20:22.969 "data_offset": 2048, 00:20:22.969 "data_size": 63488 00:20:22.969 }, 00:20:22.969 { 00:20:22.969 "name": "BaseBdev3", 00:20:22.969 "uuid": "4a29f638-efcc-5ea3-86bb-54a9c4c3dd03", 00:20:22.969 "is_configured": true, 00:20:22.969 "data_offset": 2048, 00:20:22.969 "data_size": 63488 00:20:22.969 } 00:20:22.969 ] 00:20:22.969 }' 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.969 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.534 [2024-10-07 07:41:22.846351] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.534 [2024-10-07 07:41:22.846547] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.534 [2024-10-07 07:41:22.849647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.534 [2024-10-07 07:41:22.849694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.534 [2024-10-07 07:41:22.849757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.534 [2024-10-07 07:41:22.849771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:23.534 { 00:20:23.534 "results": [ 00:20:23.534 { 00:20:23.534 "job": "raid_bdev1", 00:20:23.534 "core_mask": "0x1", 00:20:23.534 "workload": "randrw", 00:20:23.534 "percentage": 50, 
00:20:23.534 "status": "finished", 00:20:23.534 "queue_depth": 1, 00:20:23.534 "io_size": 131072, 00:20:23.534 "runtime": 1.332968, 00:20:23.534 "iops": 14550.986970429898, 00:20:23.534 "mibps": 1818.8733713037373, 00:20:23.534 "io_failed": 1, 00:20:23.534 "io_timeout": 0, 00:20:23.534 "avg_latency_us": 95.12095900937062, 00:20:23.534 "min_latency_us": 26.33142857142857, 00:20:23.534 "max_latency_us": 1568.182857142857 00:20:23.534 } 00:20:23.534 ], 00:20:23.534 "core_count": 1 00:20:23.534 } 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65386 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 65386 ']' 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 65386 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 65386 00:20:23.534 killing process with pid 65386 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 65386' 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 65386 00:20:23.534 [2024-10-07 07:41:22.889050] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.534 07:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 65386 00:20:23.791 [2024-10-07 
07:41:23.141814] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oa1QIUsqJR 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:25.171 ************************************ 00:20:25.171 END TEST raid_read_error_test 00:20:25.171 ************************************ 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:20:25.171 00:20:25.171 real 0m4.951s 00:20:25.171 user 0m5.913s 00:20:25.171 sys 0m0.604s 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:25.171 07:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.171 07:41:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:20:25.171 07:41:24 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:20:25.171 07:41:24 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:25.171 07:41:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:25.171 ************************************ 00:20:25.171 START TEST raid_write_error_test 00:20:25.171 ************************************ 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid0 3 write 00:20:25.171 07:41:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:25.171 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:25.172 07:41:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cPmHn0Mrpl 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65532 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65532 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 65532 ']' 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:25.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:25.172 07:41:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.430 [2024-10-07 07:41:24.816231] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:20:25.430 [2024-10-07 07:41:24.816592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65532 ] 00:20:25.688 [2024-10-07 07:41:25.006403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.945 [2024-10-07 07:41:25.277049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.945 [2024-10-07 07:41:25.500687] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:25.945 [2024-10-07 07:41:25.502054] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.203 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:26.203 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:20:26.203 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:26.203 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:26.203 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.203 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 BaseBdev1_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 true 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 [2024-10-07 07:41:25.824152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:26.461 [2024-10-07 07:41:25.824352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.461 [2024-10-07 07:41:25.824418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:26.461 [2024-10-07 07:41:25.824526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.461 [2024-10-07 07:41:25.827358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.461 [2024-10-07 07:41:25.827530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:26.461 BaseBdev1 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.461 BaseBdev2_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 true 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 [2024-10-07 07:41:25.906275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:26.461 [2024-10-07 07:41:25.906341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.461 [2024-10-07 07:41:25.906367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:26.461 [2024-10-07 07:41:25.906384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.461 [2024-10-07 07:41:25.909161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.461 [2024-10-07 07:41:25.909213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:26.461 BaseBdev2 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:26.461 07:41:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 BaseBdev3_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 true 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.461 [2024-10-07 07:41:25.978464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:26.461 [2024-10-07 07:41:25.978534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.461 [2024-10-07 07:41:25.978560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:26.461 [2024-10-07 07:41:25.978575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.461 [2024-10-07 07:41:25.981133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.461 [2024-10-07 07:41:25.981309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:20:26.461 BaseBdev3 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:26.461 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.462 [2024-10-07 07:41:25.990560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.462 [2024-10-07 07:41:25.992931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:26.462 [2024-10-07 07:41:25.993024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:26.462 [2024-10-07 07:41:25.993245] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:26.462 [2024-10-07 07:41:25.993261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:26.462 [2024-10-07 07:41:25.993570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:26.462 [2024-10-07 07:41:25.993769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:26.462 [2024-10-07 07:41:25.993787] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:26.462 [2024-10-07 07:41:25.993960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.462 07:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.462 07:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.462 07:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:26.462 07:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.462 07:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:26.719 07:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.719 "name": "raid_bdev1", 00:20:26.719 "uuid": "8493c259-46c0-4140-a10c-24de5566c8dc", 00:20:26.719 "strip_size_kb": 64, 00:20:26.719 "state": "online", 00:20:26.719 "raid_level": "raid0", 00:20:26.719 "superblock": true, 00:20:26.719 "num_base_bdevs": 3, 00:20:26.719 "num_base_bdevs_discovered": 3, 00:20:26.719 "num_base_bdevs_operational": 3, 00:20:26.719 "base_bdevs_list": [ 00:20:26.719 { 00:20:26.719 "name": "BaseBdev1", 
00:20:26.719 "uuid": "c5902cf3-80ce-5725-9076-3574012df71d", 00:20:26.719 "is_configured": true, 00:20:26.719 "data_offset": 2048, 00:20:26.719 "data_size": 63488 00:20:26.719 }, 00:20:26.719 { 00:20:26.719 "name": "BaseBdev2", 00:20:26.719 "uuid": "23cf38ab-b1b3-5e9c-bd38-7faad4ee3336", 00:20:26.719 "is_configured": true, 00:20:26.719 "data_offset": 2048, 00:20:26.719 "data_size": 63488 00:20:26.719 }, 00:20:26.719 { 00:20:26.719 "name": "BaseBdev3", 00:20:26.719 "uuid": "e076b26d-3f47-5ed2-b888-3db491d21ba9", 00:20:26.720 "is_configured": true, 00:20:26.720 "data_offset": 2048, 00:20:26.720 "data_size": 63488 00:20:26.720 } 00:20:26.720 ] 00:20:26.720 }' 00:20:26.720 07:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.720 07:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.977 07:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:26.977 07:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:26.977 [2024-10-07 07:41:26.524170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.911 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.169 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:28.169 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.169 "name": "raid_bdev1", 00:20:28.169 "uuid": "8493c259-46c0-4140-a10c-24de5566c8dc", 00:20:28.169 "strip_size_kb": 64, 00:20:28.169 "state": "online", 00:20:28.169 
"raid_level": "raid0", 00:20:28.169 "superblock": true, 00:20:28.169 "num_base_bdevs": 3, 00:20:28.169 "num_base_bdevs_discovered": 3, 00:20:28.169 "num_base_bdevs_operational": 3, 00:20:28.169 "base_bdevs_list": [ 00:20:28.169 { 00:20:28.169 "name": "BaseBdev1", 00:20:28.169 "uuid": "c5902cf3-80ce-5725-9076-3574012df71d", 00:20:28.169 "is_configured": true, 00:20:28.169 "data_offset": 2048, 00:20:28.169 "data_size": 63488 00:20:28.169 }, 00:20:28.169 { 00:20:28.169 "name": "BaseBdev2", 00:20:28.169 "uuid": "23cf38ab-b1b3-5e9c-bd38-7faad4ee3336", 00:20:28.169 "is_configured": true, 00:20:28.169 "data_offset": 2048, 00:20:28.169 "data_size": 63488 00:20:28.169 }, 00:20:28.169 { 00:20:28.169 "name": "BaseBdev3", 00:20:28.169 "uuid": "e076b26d-3f47-5ed2-b888-3db491d21ba9", 00:20:28.169 "is_configured": true, 00:20:28.169 "data_offset": 2048, 00:20:28.169 "data_size": 63488 00:20:28.169 } 00:20:28.169 ] 00:20:28.169 }' 00:20:28.169 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.169 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.427 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:28.427 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:28.427 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.427 [2024-10-07 07:41:27.936958] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.427 [2024-10-07 07:41:27.937139] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.428 [2024-10-07 07:41:27.940288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.428 [2024-10-07 07:41:27.940463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.428 [2024-10-07 07:41:27.940524] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.428 [2024-10-07 07:41:27.940538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:28.428 { 00:20:28.428 "results": [ 00:20:28.428 { 00:20:28.428 "job": "raid_bdev1", 00:20:28.428 "core_mask": "0x1", 00:20:28.428 "workload": "randrw", 00:20:28.428 "percentage": 50, 00:20:28.428 "status": "finished", 00:20:28.428 "queue_depth": 1, 00:20:28.428 "io_size": 131072, 00:20:28.428 "runtime": 1.410731, 00:20:28.428 "iops": 14819.976310154098, 00:20:28.428 "mibps": 1852.4970387692622, 00:20:28.428 "io_failed": 1, 00:20:28.428 "io_timeout": 0, 00:20:28.428 "avg_latency_us": 93.44853881403337, 00:20:28.428 "min_latency_us": 27.916190476190476, 00:20:28.428 "max_latency_us": 2699.4590476190474 00:20:28.428 } 00:20:28.428 ], 00:20:28.428 "core_count": 1 00:20:28.428 } 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65532 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 65532 ']' 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 65532 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # uname 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 65532 00:20:28.428 killing process with pid 65532 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:28.428 
07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 65532' 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 65532 00:20:28.428 [2024-10-07 07:41:27.986397] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:28.428 07:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 65532 00:20:28.686 [2024-10-07 07:41:28.244801] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.133 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cPmHn0Mrpl 00:20:30.133 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:30.133 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:30.392 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:20:30.392 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:20:30.392 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:30.392 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:30.392 07:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:20:30.392 ************************************ 00:20:30.392 END TEST raid_write_error_test 00:20:30.392 ************************************ 00:20:30.392 00:20:30.392 real 0m5.028s 00:20:30.392 user 0m5.991s 00:20:30.392 sys 0m0.621s 00:20:30.392 07:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:30.392 07:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.392 07:41:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:30.392 07:41:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:20:30.392 07:41:29 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:20:30.392 07:41:29 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:30.392 07:41:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.392 ************************************ 00:20:30.392 START TEST raid_state_function_test 00:20:30.392 ************************************ 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test concat 3 false 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:30.392 07:41:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:30.392 Process raid pid: 65681 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:30.392 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65681 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65681' 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65681 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 65681 ']' 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 
-- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:30.393 07:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.393 [2024-10-07 07:41:29.888848] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:20:30.393 [2024-10-07 07:41:29.889288] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.651 [2024-10-07 07:41:30.081538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.909 [2024-10-07 07:41:30.359088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.167 [2024-10-07 07:41:30.604574] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.167 [2024-10-07 07:41:30.604832] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 
64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.425 [2024-10-07 07:41:30.864389] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.425 [2024-10-07 07:41:30.864455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:31.425 [2024-10-07 07:41:30.864469] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.425 [2024-10-07 07:41:30.864485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.425 [2024-10-07 07:41:30.864494] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:31.425 [2024-10-07 07:41:30.864508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:31.425 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.425 "name": "Existed_Raid", 00:20:31.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.425 "strip_size_kb": 64, 00:20:31.425 "state": "configuring", 00:20:31.425 "raid_level": "concat", 00:20:31.425 "superblock": false, 00:20:31.425 "num_base_bdevs": 3, 00:20:31.425 "num_base_bdevs_discovered": 0, 00:20:31.425 "num_base_bdevs_operational": 3, 00:20:31.425 "base_bdevs_list": [ 00:20:31.425 { 00:20:31.425 "name": "BaseBdev1", 00:20:31.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.425 "is_configured": false, 00:20:31.425 "data_offset": 0, 00:20:31.425 "data_size": 0 00:20:31.425 }, 00:20:31.425 { 00:20:31.425 "name": "BaseBdev2", 00:20:31.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.425 "is_configured": false, 00:20:31.425 "data_offset": 0, 00:20:31.425 "data_size": 0 00:20:31.425 }, 00:20:31.425 { 00:20:31.425 "name": "BaseBdev3", 00:20:31.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.425 "is_configured": 
false, 00:20:31.425 "data_offset": 0, 00:20:31.425 "data_size": 0 00:20:31.425 } 00:20:31.425 ] 00:20:31.426 }' 00:20:31.426 07:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.426 07:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.993 [2024-10-07 07:41:31.276388] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:31.993 [2024-10-07 07:41:31.276433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.993 [2024-10-07 07:41:31.288411] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.993 [2024-10-07 07:41:31.288467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:31.993 [2024-10-07 07:41:31.288479] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.993 [2024-10-07 07:41:31.288494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.993 [2024-10-07 07:41:31.288503] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:31.993 [2024-10-07 07:41:31.288518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.993 [2024-10-07 07:41:31.348778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.993 BaseBdev1 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.993 [ 00:20:31.993 { 00:20:31.993 "name": "BaseBdev1", 00:20:31.993 "aliases": [ 00:20:31.993 "bcd98bc3-e9f8-4645-bb30-3b6ee8f04d58" 00:20:31.993 ], 00:20:31.993 "product_name": "Malloc disk", 00:20:31.993 "block_size": 512, 00:20:31.993 "num_blocks": 65536, 00:20:31.993 "uuid": "bcd98bc3-e9f8-4645-bb30-3b6ee8f04d58", 00:20:31.993 "assigned_rate_limits": { 00:20:31.993 "rw_ios_per_sec": 0, 00:20:31.993 "rw_mbytes_per_sec": 0, 00:20:31.993 "r_mbytes_per_sec": 0, 00:20:31.993 "w_mbytes_per_sec": 0 00:20:31.993 }, 00:20:31.993 "claimed": true, 00:20:31.993 "claim_type": "exclusive_write", 00:20:31.993 "zoned": false, 00:20:31.993 "supported_io_types": { 00:20:31.993 "read": true, 00:20:31.993 "write": true, 00:20:31.993 "unmap": true, 00:20:31.993 "flush": true, 00:20:31.993 "reset": true, 00:20:31.993 "nvme_admin": false, 00:20:31.993 "nvme_io": false, 00:20:31.993 "nvme_io_md": false, 00:20:31.993 "write_zeroes": true, 00:20:31.993 "zcopy": true, 00:20:31.993 "get_zone_info": false, 00:20:31.993 "zone_management": false, 00:20:31.993 "zone_append": false, 00:20:31.993 "compare": false, 00:20:31.993 "compare_and_write": false, 00:20:31.993 "abort": true, 00:20:31.993 "seek_hole": false, 00:20:31.993 "seek_data": false, 00:20:31.993 "copy": true, 00:20:31.993 "nvme_iov_md": false 00:20:31.993 }, 00:20:31.993 "memory_domains": [ 00:20:31.993 { 00:20:31.993 "dma_device_id": "system", 00:20:31.993 "dma_device_type": 1 00:20:31.993 }, 00:20:31.993 { 00:20:31.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.993 "dma_device_type": 2 00:20:31.993 } 00:20:31.993 ], 
00:20:31.993 "driver_specific": {} 00:20:31.993 } 00:20:31.993 ] 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:31.993 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.994 "name": "Existed_Raid", 00:20:31.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.994 "strip_size_kb": 64, 00:20:31.994 "state": "configuring", 00:20:31.994 "raid_level": "concat", 00:20:31.994 "superblock": false, 00:20:31.994 "num_base_bdevs": 3, 00:20:31.994 "num_base_bdevs_discovered": 1, 00:20:31.994 "num_base_bdevs_operational": 3, 00:20:31.994 "base_bdevs_list": [ 00:20:31.994 { 00:20:31.994 "name": "BaseBdev1", 00:20:31.994 "uuid": "bcd98bc3-e9f8-4645-bb30-3b6ee8f04d58", 00:20:31.994 "is_configured": true, 00:20:31.994 "data_offset": 0, 00:20:31.994 "data_size": 65536 00:20:31.994 }, 00:20:31.994 { 00:20:31.994 "name": "BaseBdev2", 00:20:31.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.994 "is_configured": false, 00:20:31.994 "data_offset": 0, 00:20:31.994 "data_size": 0 00:20:31.994 }, 00:20:31.994 { 00:20:31.994 "name": "BaseBdev3", 00:20:31.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.994 "is_configured": false, 00:20:31.994 "data_offset": 0, 00:20:31.994 "data_size": 0 00:20:31.994 } 00:20:31.994 ] 00:20:31.994 }' 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.994 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.252 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:32.252 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:32.252 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.511 [2024-10-07 07:41:31.812963] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:32.511 [2024-10-07 07:41:31.813164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.511 [2024-10-07 07:41:31.820992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.511 [2024-10-07 07:41:31.823386] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:32.511 [2024-10-07 07:41:31.823574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:32.511 [2024-10-07 07:41:31.823598] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:32.511 [2024-10-07 07:41:31.823614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.511 "name": "Existed_Raid", 00:20:32.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.511 "strip_size_kb": 64, 00:20:32.511 "state": "configuring", 00:20:32.511 "raid_level": "concat", 00:20:32.511 "superblock": false, 00:20:32.511 "num_base_bdevs": 3, 00:20:32.511 "num_base_bdevs_discovered": 1, 00:20:32.511 "num_base_bdevs_operational": 3, 00:20:32.511 "base_bdevs_list": [ 00:20:32.511 { 00:20:32.511 "name": "BaseBdev1", 00:20:32.511 "uuid": "bcd98bc3-e9f8-4645-bb30-3b6ee8f04d58", 00:20:32.511 "is_configured": true, 00:20:32.511 "data_offset": 0, 00:20:32.511 "data_size": 65536 00:20:32.511 }, 00:20:32.511 { 
00:20:32.511 "name": "BaseBdev2", 00:20:32.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.511 "is_configured": false, 00:20:32.511 "data_offset": 0, 00:20:32.511 "data_size": 0 00:20:32.511 }, 00:20:32.511 { 00:20:32.511 "name": "BaseBdev3", 00:20:32.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.511 "is_configured": false, 00:20:32.511 "data_offset": 0, 00:20:32.511 "data_size": 0 00:20:32.511 } 00:20:32.511 ] 00:20:32.511 }' 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.511 07:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.770 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:32.770 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:32.770 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.029 [2024-10-07 07:41:32.344216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:33.029 BaseBdev2 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.029 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.030 [ 00:20:33.030 { 00:20:33.030 "name": "BaseBdev2", 00:20:33.030 "aliases": [ 00:20:33.030 "c4ddc137-69f0-48e4-9b1a-2c46d0b61240" 00:20:33.030 ], 00:20:33.030 "product_name": "Malloc disk", 00:20:33.030 "block_size": 512, 00:20:33.030 "num_blocks": 65536, 00:20:33.030 "uuid": "c4ddc137-69f0-48e4-9b1a-2c46d0b61240", 00:20:33.030 "assigned_rate_limits": { 00:20:33.030 "rw_ios_per_sec": 0, 00:20:33.030 "rw_mbytes_per_sec": 0, 00:20:33.030 "r_mbytes_per_sec": 0, 00:20:33.030 "w_mbytes_per_sec": 0 00:20:33.030 }, 00:20:33.030 "claimed": true, 00:20:33.030 "claim_type": "exclusive_write", 00:20:33.030 "zoned": false, 00:20:33.030 "supported_io_types": { 00:20:33.030 "read": true, 00:20:33.030 "write": true, 00:20:33.030 "unmap": true, 00:20:33.030 "flush": true, 00:20:33.030 "reset": true, 00:20:33.030 "nvme_admin": false, 00:20:33.030 "nvme_io": false, 00:20:33.030 "nvme_io_md": false, 00:20:33.030 "write_zeroes": true, 00:20:33.030 "zcopy": true, 00:20:33.030 "get_zone_info": false, 00:20:33.030 "zone_management": false, 00:20:33.030 "zone_append": false, 00:20:33.030 "compare": false, 00:20:33.030 "compare_and_write": false, 00:20:33.030 "abort": true, 00:20:33.030 "seek_hole": false, 00:20:33.030 "seek_data": false, 00:20:33.030 
"copy": true, 00:20:33.030 "nvme_iov_md": false 00:20:33.030 }, 00:20:33.030 "memory_domains": [ 00:20:33.030 { 00:20:33.030 "dma_device_id": "system", 00:20:33.030 "dma_device_type": 1 00:20:33.030 }, 00:20:33.030 { 00:20:33.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.030 "dma_device_type": 2 00:20:33.030 } 00:20:33.030 ], 00:20:33.030 "driver_specific": {} 00:20:33.030 } 00:20:33.030 ] 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.030 
07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.030 "name": "Existed_Raid", 00:20:33.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.030 "strip_size_kb": 64, 00:20:33.030 "state": "configuring", 00:20:33.030 "raid_level": "concat", 00:20:33.030 "superblock": false, 00:20:33.030 "num_base_bdevs": 3, 00:20:33.030 "num_base_bdevs_discovered": 2, 00:20:33.030 "num_base_bdevs_operational": 3, 00:20:33.030 "base_bdevs_list": [ 00:20:33.030 { 00:20:33.030 "name": "BaseBdev1", 00:20:33.030 "uuid": "bcd98bc3-e9f8-4645-bb30-3b6ee8f04d58", 00:20:33.030 "is_configured": true, 00:20:33.030 "data_offset": 0, 00:20:33.030 "data_size": 65536 00:20:33.030 }, 00:20:33.030 { 00:20:33.030 "name": "BaseBdev2", 00:20:33.030 "uuid": "c4ddc137-69f0-48e4-9b1a-2c46d0b61240", 00:20:33.030 "is_configured": true, 00:20:33.030 "data_offset": 0, 00:20:33.030 "data_size": 65536 00:20:33.030 }, 00:20:33.030 { 00:20:33.030 "name": "BaseBdev3", 00:20:33.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.030 "is_configured": false, 00:20:33.030 "data_offset": 0, 00:20:33.030 "data_size": 0 00:20:33.030 } 00:20:33.030 ] 00:20:33.030 }' 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.030 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.289 07:41:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:33.289 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:33.289 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 [2024-10-07 07:41:32.876424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:33.548 [2024-10-07 07:41:32.876486] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:33.548 [2024-10-07 07:41:32.876502] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:33.548 [2024-10-07 07:41:32.876848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:33.548 [2024-10-07 07:41:32.877028] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:33.548 [2024-10-07 07:41:32.877040] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:33.548 [2024-10-07 07:41:32.877367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.548 BaseBdev3 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 [ 00:20:33.548 { 00:20:33.548 "name": "BaseBdev3", 00:20:33.548 "aliases": [ 00:20:33.548 "14330c82-4a41-4b1e-867c-b9d1ff42f19d" 00:20:33.548 ], 00:20:33.548 "product_name": "Malloc disk", 00:20:33.548 "block_size": 512, 00:20:33.548 "num_blocks": 65536, 00:20:33.548 "uuid": "14330c82-4a41-4b1e-867c-b9d1ff42f19d", 00:20:33.548 "assigned_rate_limits": { 00:20:33.548 "rw_ios_per_sec": 0, 00:20:33.548 "rw_mbytes_per_sec": 0, 00:20:33.548 "r_mbytes_per_sec": 0, 00:20:33.548 "w_mbytes_per_sec": 0 00:20:33.548 }, 00:20:33.548 "claimed": true, 00:20:33.548 "claim_type": "exclusive_write", 00:20:33.548 "zoned": false, 00:20:33.548 "supported_io_types": { 00:20:33.548 "read": true, 00:20:33.548 "write": true, 00:20:33.548 "unmap": true, 00:20:33.548 "flush": true, 00:20:33.548 "reset": true, 00:20:33.548 "nvme_admin": false, 00:20:33.548 "nvme_io": false, 00:20:33.548 "nvme_io_md": false, 00:20:33.548 "write_zeroes": true, 00:20:33.548 "zcopy": true, 00:20:33.548 "get_zone_info": false, 00:20:33.548 "zone_management": false, 00:20:33.548 "zone_append": false, 00:20:33.548 "compare": false, 00:20:33.548 "compare_and_write": false, 
00:20:33.548 "abort": true, 00:20:33.548 "seek_hole": false, 00:20:33.548 "seek_data": false, 00:20:33.548 "copy": true, 00:20:33.548 "nvme_iov_md": false 00:20:33.548 }, 00:20:33.548 "memory_domains": [ 00:20:33.548 { 00:20:33.548 "dma_device_id": "system", 00:20:33.548 "dma_device_type": 1 00:20:33.548 }, 00:20:33.548 { 00:20:33.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.548 "dma_device_type": 2 00:20:33.548 } 00:20:33.548 ], 00:20:33.548 "driver_specific": {} 00:20:33.548 } 00:20:33.548 ] 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.548 
07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.548 "name": "Existed_Raid", 00:20:33.548 "uuid": "f84529fb-3e5b-4ade-9450-8da945d70cd0", 00:20:33.548 "strip_size_kb": 64, 00:20:33.548 "state": "online", 00:20:33.548 "raid_level": "concat", 00:20:33.548 "superblock": false, 00:20:33.548 "num_base_bdevs": 3, 00:20:33.548 "num_base_bdevs_discovered": 3, 00:20:33.548 "num_base_bdevs_operational": 3, 00:20:33.548 "base_bdevs_list": [ 00:20:33.548 { 00:20:33.548 "name": "BaseBdev1", 00:20:33.548 "uuid": "bcd98bc3-e9f8-4645-bb30-3b6ee8f04d58", 00:20:33.548 "is_configured": true, 00:20:33.548 "data_offset": 0, 00:20:33.548 "data_size": 65536 00:20:33.548 }, 00:20:33.548 { 00:20:33.548 "name": "BaseBdev2", 00:20:33.548 "uuid": "c4ddc137-69f0-48e4-9b1a-2c46d0b61240", 00:20:33.548 "is_configured": true, 00:20:33.548 "data_offset": 0, 00:20:33.548 "data_size": 65536 00:20:33.548 }, 00:20:33.548 { 00:20:33.548 "name": "BaseBdev3", 00:20:33.548 "uuid": "14330c82-4a41-4b1e-867c-b9d1ff42f19d", 00:20:33.548 "is_configured": true, 00:20:33.548 "data_offset": 0, 00:20:33.548 "data_size": 65536 00:20:33.548 } 00:20:33.548 ] 00:20:33.548 }' 00:20:33.548 07:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.548 07:41:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.116 [2024-10-07 07:41:33.380996] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:34.116 "name": "Existed_Raid", 00:20:34.116 "aliases": [ 00:20:34.116 "f84529fb-3e5b-4ade-9450-8da945d70cd0" 00:20:34.116 ], 00:20:34.116 "product_name": "Raid Volume", 00:20:34.116 "block_size": 512, 00:20:34.116 "num_blocks": 196608, 00:20:34.116 "uuid": "f84529fb-3e5b-4ade-9450-8da945d70cd0", 00:20:34.116 "assigned_rate_limits": { 00:20:34.116 "rw_ios_per_sec": 0, 00:20:34.116 "rw_mbytes_per_sec": 0, 00:20:34.116 "r_mbytes_per_sec": 0, 00:20:34.116 
"w_mbytes_per_sec": 0 00:20:34.116 }, 00:20:34.116 "claimed": false, 00:20:34.116 "zoned": false, 00:20:34.116 "supported_io_types": { 00:20:34.116 "read": true, 00:20:34.116 "write": true, 00:20:34.116 "unmap": true, 00:20:34.116 "flush": true, 00:20:34.116 "reset": true, 00:20:34.116 "nvme_admin": false, 00:20:34.116 "nvme_io": false, 00:20:34.116 "nvme_io_md": false, 00:20:34.116 "write_zeroes": true, 00:20:34.116 "zcopy": false, 00:20:34.116 "get_zone_info": false, 00:20:34.116 "zone_management": false, 00:20:34.116 "zone_append": false, 00:20:34.116 "compare": false, 00:20:34.116 "compare_and_write": false, 00:20:34.116 "abort": false, 00:20:34.116 "seek_hole": false, 00:20:34.116 "seek_data": false, 00:20:34.116 "copy": false, 00:20:34.116 "nvme_iov_md": false 00:20:34.116 }, 00:20:34.116 "memory_domains": [ 00:20:34.116 { 00:20:34.116 "dma_device_id": "system", 00:20:34.116 "dma_device_type": 1 00:20:34.116 }, 00:20:34.116 { 00:20:34.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.116 "dma_device_type": 2 00:20:34.116 }, 00:20:34.116 { 00:20:34.116 "dma_device_id": "system", 00:20:34.116 "dma_device_type": 1 00:20:34.116 }, 00:20:34.116 { 00:20:34.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.116 "dma_device_type": 2 00:20:34.116 }, 00:20:34.116 { 00:20:34.116 "dma_device_id": "system", 00:20:34.116 "dma_device_type": 1 00:20:34.116 }, 00:20:34.116 { 00:20:34.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.116 "dma_device_type": 2 00:20:34.116 } 00:20:34.116 ], 00:20:34.116 "driver_specific": { 00:20:34.116 "raid": { 00:20:34.116 "uuid": "f84529fb-3e5b-4ade-9450-8da945d70cd0", 00:20:34.116 "strip_size_kb": 64, 00:20:34.116 "state": "online", 00:20:34.116 "raid_level": "concat", 00:20:34.116 "superblock": false, 00:20:34.116 "num_base_bdevs": 3, 00:20:34.116 "num_base_bdevs_discovered": 3, 00:20:34.116 "num_base_bdevs_operational": 3, 00:20:34.116 "base_bdevs_list": [ 00:20:34.116 { 00:20:34.116 "name": "BaseBdev1", 00:20:34.116 "uuid": 
"bcd98bc3-e9f8-4645-bb30-3b6ee8f04d58", 00:20:34.116 "is_configured": true, 00:20:34.116 "data_offset": 0, 00:20:34.116 "data_size": 65536 00:20:34.116 }, 00:20:34.116 { 00:20:34.116 "name": "BaseBdev2", 00:20:34.116 "uuid": "c4ddc137-69f0-48e4-9b1a-2c46d0b61240", 00:20:34.116 "is_configured": true, 00:20:34.116 "data_offset": 0, 00:20:34.116 "data_size": 65536 00:20:34.116 }, 00:20:34.116 { 00:20:34.116 "name": "BaseBdev3", 00:20:34.116 "uuid": "14330c82-4a41-4b1e-867c-b9d1ff42f19d", 00:20:34.116 "is_configured": true, 00:20:34.116 "data_offset": 0, 00:20:34.116 "data_size": 65536 00:20:34.116 } 00:20:34.116 ] 00:20:34.116 } 00:20:34.116 } 00:20:34.116 }' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:34.116 BaseBdev2 00:20:34.116 BaseBdev3' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.116 
07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:34.116 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.117 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.117 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.117 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.117 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.117 
07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.117 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:34.117 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.117 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.117 [2024-10-07 07:41:33.644733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:34.117 [2024-10-07 07:41:33.644894] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.117 [2024-10-07 07:41:33.645003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.375 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.375 "name": "Existed_Raid", 00:20:34.375 "uuid": "f84529fb-3e5b-4ade-9450-8da945d70cd0", 00:20:34.375 "strip_size_kb": 64, 00:20:34.375 "state": "offline", 00:20:34.375 "raid_level": "concat", 00:20:34.375 "superblock": false, 00:20:34.375 "num_base_bdevs": 3, 00:20:34.375 "num_base_bdevs_discovered": 2, 00:20:34.375 "num_base_bdevs_operational": 2, 00:20:34.375 "base_bdevs_list": [ 00:20:34.375 { 00:20:34.375 "name": null, 00:20:34.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.375 "is_configured": false, 00:20:34.375 "data_offset": 0, 00:20:34.375 "data_size": 65536 00:20:34.375 }, 00:20:34.375 { 00:20:34.375 "name": "BaseBdev2", 00:20:34.376 "uuid": "c4ddc137-69f0-48e4-9b1a-2c46d0b61240", 00:20:34.376 
"is_configured": true, 00:20:34.376 "data_offset": 0, 00:20:34.376 "data_size": 65536 00:20:34.376 }, 00:20:34.376 { 00:20:34.376 "name": "BaseBdev3", 00:20:34.376 "uuid": "14330c82-4a41-4b1e-867c-b9d1ff42f19d", 00:20:34.376 "is_configured": true, 00:20:34.376 "data_offset": 0, 00:20:34.376 "data_size": 65536 00:20:34.376 } 00:20:34.376 ] 00:20:34.376 }' 00:20:34.376 07:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.376 07:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.634 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.892 [2024-10-07 07:41:34.195796] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:20:34.892 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.892 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:34.892 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:34.893 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.893 [2024-10-07 07:41:34.354663] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:34.893 [2024-10-07 07:41:34.354736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:35.151 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.151 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:35.151 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 BaseBdev2 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 
-- # local i 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 [ 00:20:35.152 { 00:20:35.152 "name": "BaseBdev2", 00:20:35.152 "aliases": [ 00:20:35.152 "32f682e7-de7e-4b40-b245-b19c9c78c258" 00:20:35.152 ], 00:20:35.152 "product_name": "Malloc disk", 00:20:35.152 "block_size": 512, 00:20:35.152 "num_blocks": 65536, 00:20:35.152 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:35.152 "assigned_rate_limits": { 00:20:35.152 "rw_ios_per_sec": 0, 00:20:35.152 "rw_mbytes_per_sec": 0, 00:20:35.152 "r_mbytes_per_sec": 0, 00:20:35.152 "w_mbytes_per_sec": 0 00:20:35.152 }, 00:20:35.152 "claimed": false, 00:20:35.152 "zoned": false, 00:20:35.152 "supported_io_types": { 00:20:35.152 "read": true, 00:20:35.152 "write": true, 00:20:35.152 "unmap": true, 00:20:35.152 "flush": true, 00:20:35.152 "reset": true, 00:20:35.152 "nvme_admin": false, 00:20:35.152 "nvme_io": false, 00:20:35.152 "nvme_io_md": false, 00:20:35.152 "write_zeroes": true, 00:20:35.152 "zcopy": true, 00:20:35.152 "get_zone_info": false, 
00:20:35.152 "zone_management": false, 00:20:35.152 "zone_append": false, 00:20:35.152 "compare": false, 00:20:35.152 "compare_and_write": false, 00:20:35.152 "abort": true, 00:20:35.152 "seek_hole": false, 00:20:35.152 "seek_data": false, 00:20:35.152 "copy": true, 00:20:35.152 "nvme_iov_md": false 00:20:35.152 }, 00:20:35.152 "memory_domains": [ 00:20:35.152 { 00:20:35.152 "dma_device_id": "system", 00:20:35.152 "dma_device_type": 1 00:20:35.152 }, 00:20:35.152 { 00:20:35.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.152 "dma_device_type": 2 00:20:35.152 } 00:20:35.152 ], 00:20:35.152 "driver_specific": {} 00:20:35.152 } 00:20:35.152 ] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 BaseBdev3 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 
00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 [ 00:20:35.152 { 00:20:35.152 "name": "BaseBdev3", 00:20:35.152 "aliases": [ 00:20:35.152 "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d" 00:20:35.152 ], 00:20:35.152 "product_name": "Malloc disk", 00:20:35.152 "block_size": 512, 00:20:35.152 "num_blocks": 65536, 00:20:35.152 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:35.152 "assigned_rate_limits": { 00:20:35.152 "rw_ios_per_sec": 0, 00:20:35.152 "rw_mbytes_per_sec": 0, 00:20:35.152 "r_mbytes_per_sec": 0, 00:20:35.152 "w_mbytes_per_sec": 0 00:20:35.152 }, 00:20:35.152 "claimed": false, 00:20:35.152 "zoned": false, 00:20:35.152 "supported_io_types": { 00:20:35.152 "read": true, 00:20:35.152 "write": true, 00:20:35.152 "unmap": true, 00:20:35.152 "flush": true, 00:20:35.152 "reset": true, 00:20:35.152 "nvme_admin": false, 00:20:35.152 "nvme_io": false, 00:20:35.152 "nvme_io_md": false, 00:20:35.152 "write_zeroes": true, 00:20:35.152 "zcopy": true, 00:20:35.152 "get_zone_info": false, 00:20:35.152 
"zone_management": false, 00:20:35.152 "zone_append": false, 00:20:35.152 "compare": false, 00:20:35.152 "compare_and_write": false, 00:20:35.152 "abort": true, 00:20:35.152 "seek_hole": false, 00:20:35.152 "seek_data": false, 00:20:35.152 "copy": true, 00:20:35.152 "nvme_iov_md": false 00:20:35.152 }, 00:20:35.152 "memory_domains": [ 00:20:35.152 { 00:20:35.152 "dma_device_id": "system", 00:20:35.152 "dma_device_type": 1 00:20:35.152 }, 00:20:35.152 { 00:20:35.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.152 "dma_device_type": 2 00:20:35.152 } 00:20:35.152 ], 00:20:35.152 "driver_specific": {} 00:20:35.152 } 00:20:35.152 ] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.152 [2024-10-07 07:41:34.669383] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:35.152 [2024-10-07 07:41:34.669561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:35.152 [2024-10-07 07:41:34.669699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.152 [2024-10-07 07:41:34.672121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:35.152 07:41:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.152 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.153 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.153 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.153 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.153 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.153 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.411 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.411 "name": "Existed_Raid", 00:20:35.411 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:35.411 "strip_size_kb": 64, 00:20:35.411 "state": "configuring", 00:20:35.411 "raid_level": "concat", 00:20:35.411 "superblock": false, 00:20:35.411 "num_base_bdevs": 3, 00:20:35.411 "num_base_bdevs_discovered": 2, 00:20:35.411 "num_base_bdevs_operational": 3, 00:20:35.411 "base_bdevs_list": [ 00:20:35.411 { 00:20:35.411 "name": "BaseBdev1", 00:20:35.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.411 "is_configured": false, 00:20:35.411 "data_offset": 0, 00:20:35.411 "data_size": 0 00:20:35.411 }, 00:20:35.411 { 00:20:35.411 "name": "BaseBdev2", 00:20:35.411 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:35.411 "is_configured": true, 00:20:35.411 "data_offset": 0, 00:20:35.411 "data_size": 65536 00:20:35.411 }, 00:20:35.411 { 00:20:35.411 "name": "BaseBdev3", 00:20:35.411 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:35.411 "is_configured": true, 00:20:35.411 "data_offset": 0, 00:20:35.411 "data_size": 65536 00:20:35.411 } 00:20:35.411 ] 00:20:35.411 }' 00:20:35.411 07:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.411 07:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.669 [2024-10-07 07:41:35.125481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:35.669 07:41:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.669 "name": "Existed_Raid", 00:20:35.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.669 "strip_size_kb": 64, 00:20:35.669 "state": "configuring", 00:20:35.669 "raid_level": "concat", 00:20:35.669 "superblock": false, 00:20:35.669 "num_base_bdevs": 3, 00:20:35.669 "num_base_bdevs_discovered": 1, 00:20:35.669 
"num_base_bdevs_operational": 3, 00:20:35.669 "base_bdevs_list": [ 00:20:35.669 { 00:20:35.669 "name": "BaseBdev1", 00:20:35.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.669 "is_configured": false, 00:20:35.669 "data_offset": 0, 00:20:35.669 "data_size": 0 00:20:35.669 }, 00:20:35.669 { 00:20:35.669 "name": null, 00:20:35.669 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:35.669 "is_configured": false, 00:20:35.669 "data_offset": 0, 00:20:35.669 "data_size": 65536 00:20:35.669 }, 00:20:35.669 { 00:20:35.669 "name": "BaseBdev3", 00:20:35.669 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:35.669 "is_configured": true, 00:20:35.669 "data_offset": 0, 00:20:35.669 "data_size": 65536 00:20:35.669 } 00:20:35.669 ] 00:20:35.669 }' 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.669 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:36.236 [2024-10-07 07:41:35.647986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.236 BaseBdev1 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.236 [ 00:20:36.236 { 00:20:36.236 "name": "BaseBdev1", 00:20:36.236 "aliases": [ 00:20:36.236 "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d" 00:20:36.236 ], 00:20:36.236 "product_name": "Malloc disk", 00:20:36.236 "block_size": 512, 00:20:36.236 "num_blocks": 65536, 00:20:36.236 
"uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:36.236 "assigned_rate_limits": { 00:20:36.236 "rw_ios_per_sec": 0, 00:20:36.236 "rw_mbytes_per_sec": 0, 00:20:36.236 "r_mbytes_per_sec": 0, 00:20:36.236 "w_mbytes_per_sec": 0 00:20:36.236 }, 00:20:36.236 "claimed": true, 00:20:36.236 "claim_type": "exclusive_write", 00:20:36.236 "zoned": false, 00:20:36.236 "supported_io_types": { 00:20:36.236 "read": true, 00:20:36.236 "write": true, 00:20:36.236 "unmap": true, 00:20:36.236 "flush": true, 00:20:36.236 "reset": true, 00:20:36.236 "nvme_admin": false, 00:20:36.236 "nvme_io": false, 00:20:36.236 "nvme_io_md": false, 00:20:36.236 "write_zeroes": true, 00:20:36.236 "zcopy": true, 00:20:36.236 "get_zone_info": false, 00:20:36.236 "zone_management": false, 00:20:36.236 "zone_append": false, 00:20:36.236 "compare": false, 00:20:36.236 "compare_and_write": false, 00:20:36.236 "abort": true, 00:20:36.236 "seek_hole": false, 00:20:36.236 "seek_data": false, 00:20:36.236 "copy": true, 00:20:36.236 "nvme_iov_md": false 00:20:36.236 }, 00:20:36.236 "memory_domains": [ 00:20:36.236 { 00:20:36.236 "dma_device_id": "system", 00:20:36.236 "dma_device_type": 1 00:20:36.236 }, 00:20:36.236 { 00:20:36.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.236 "dma_device_type": 2 00:20:36.236 } 00:20:36.236 ], 00:20:36.236 "driver_specific": {} 00:20:36.236 } 00:20:36.236 ] 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.236 
07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.236 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.237 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.237 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.237 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.237 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.237 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.237 "name": "Existed_Raid", 00:20:36.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.237 "strip_size_kb": 64, 00:20:36.237 "state": "configuring", 00:20:36.237 "raid_level": "concat", 00:20:36.237 "superblock": false, 00:20:36.237 "num_base_bdevs": 3, 00:20:36.237 "num_base_bdevs_discovered": 2, 00:20:36.237 "num_base_bdevs_operational": 3, 00:20:36.237 "base_bdevs_list": [ 00:20:36.237 { 00:20:36.237 "name": "BaseBdev1", 00:20:36.237 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:36.237 "is_configured": true, 00:20:36.237 
"data_offset": 0, 00:20:36.237 "data_size": 65536 00:20:36.237 }, 00:20:36.237 { 00:20:36.237 "name": null, 00:20:36.237 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:36.237 "is_configured": false, 00:20:36.237 "data_offset": 0, 00:20:36.237 "data_size": 65536 00:20:36.237 }, 00:20:36.237 { 00:20:36.237 "name": "BaseBdev3", 00:20:36.237 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:36.237 "is_configured": true, 00:20:36.237 "data_offset": 0, 00:20:36.237 "data_size": 65536 00:20:36.237 } 00:20:36.237 ] 00:20:36.237 }' 00:20:36.237 07:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.237 07:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.803 [2024-10-07 07:41:36.172189] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.803 
07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.803 "name": "Existed_Raid", 00:20:36.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.803 "strip_size_kb": 64, 00:20:36.803 "state": "configuring", 
00:20:36.803 "raid_level": "concat", 00:20:36.803 "superblock": false, 00:20:36.803 "num_base_bdevs": 3, 00:20:36.803 "num_base_bdevs_discovered": 1, 00:20:36.803 "num_base_bdevs_operational": 3, 00:20:36.803 "base_bdevs_list": [ 00:20:36.803 { 00:20:36.803 "name": "BaseBdev1", 00:20:36.803 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:36.803 "is_configured": true, 00:20:36.803 "data_offset": 0, 00:20:36.803 "data_size": 65536 00:20:36.803 }, 00:20:36.803 { 00:20:36.803 "name": null, 00:20:36.803 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:36.803 "is_configured": false, 00:20:36.803 "data_offset": 0, 00:20:36.803 "data_size": 65536 00:20:36.803 }, 00:20:36.803 { 00:20:36.803 "name": null, 00:20:36.803 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:36.803 "is_configured": false, 00:20:36.803 "data_offset": 0, 00:20:36.803 "data_size": 65536 00:20:36.803 } 00:20:36.803 ] 00:20:36.803 }' 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.803 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.062 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.062 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:37.062 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:37.062 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.062 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:37.322 07:41:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.322 [2024-10-07 07:41:36.628297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:37.322 07:41:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.322 "name": "Existed_Raid", 00:20:37.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.322 "strip_size_kb": 64, 00:20:37.322 "state": "configuring", 00:20:37.322 "raid_level": "concat", 00:20:37.322 "superblock": false, 00:20:37.322 "num_base_bdevs": 3, 00:20:37.322 "num_base_bdevs_discovered": 2, 00:20:37.322 "num_base_bdevs_operational": 3, 00:20:37.322 "base_bdevs_list": [ 00:20:37.322 { 00:20:37.322 "name": "BaseBdev1", 00:20:37.322 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:37.322 "is_configured": true, 00:20:37.322 "data_offset": 0, 00:20:37.322 "data_size": 65536 00:20:37.322 }, 00:20:37.322 { 00:20:37.322 "name": null, 00:20:37.322 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:37.322 "is_configured": false, 00:20:37.322 "data_offset": 0, 00:20:37.322 "data_size": 65536 00:20:37.322 }, 00:20:37.322 { 00:20:37.322 "name": "BaseBdev3", 00:20:37.322 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:37.322 "is_configured": true, 00:20:37.322 "data_offset": 0, 00:20:37.322 "data_size": 65536 00:20:37.322 } 00:20:37.322 ] 00:20:37.322 }' 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.322 07:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:37.580 07:41:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:37.580 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.581 [2024-10-07 07:41:37.116434] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.840 "name": "Existed_Raid", 00:20:37.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.840 "strip_size_kb": 64, 00:20:37.840 "state": "configuring", 00:20:37.840 "raid_level": "concat", 00:20:37.840 "superblock": false, 00:20:37.840 "num_base_bdevs": 3, 00:20:37.840 "num_base_bdevs_discovered": 1, 00:20:37.840 "num_base_bdevs_operational": 3, 00:20:37.840 "base_bdevs_list": [ 00:20:37.840 { 00:20:37.840 "name": null, 00:20:37.840 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:37.840 "is_configured": false, 00:20:37.840 "data_offset": 0, 00:20:37.840 "data_size": 65536 00:20:37.840 }, 00:20:37.840 { 00:20:37.840 "name": null, 00:20:37.840 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:37.840 "is_configured": false, 00:20:37.840 "data_offset": 0, 00:20:37.840 "data_size": 65536 00:20:37.840 }, 00:20:37.840 { 00:20:37.840 "name": "BaseBdev3", 00:20:37.840 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:37.840 "is_configured": true, 00:20:37.840 "data_offset": 0, 00:20:37.840 "data_size": 65536 00:20:37.840 } 00:20:37.840 ] 00:20:37.840 }' 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.840 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.416 
07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:38.416 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.416 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.416 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.416 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.416 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:38.416 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.417 [2024-10-07 07:41:37.737718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.417 
07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.417 "name": "Existed_Raid", 00:20:38.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.417 "strip_size_kb": 64, 00:20:38.417 "state": "configuring", 00:20:38.417 "raid_level": "concat", 00:20:38.417 "superblock": false, 00:20:38.417 "num_base_bdevs": 3, 00:20:38.417 "num_base_bdevs_discovered": 2, 00:20:38.417 "num_base_bdevs_operational": 3, 00:20:38.417 "base_bdevs_list": [ 00:20:38.417 { 00:20:38.417 "name": null, 00:20:38.417 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:38.417 "is_configured": false, 00:20:38.417 "data_offset": 0, 00:20:38.417 "data_size": 65536 00:20:38.417 }, 00:20:38.417 { 00:20:38.417 "name": "BaseBdev2", 00:20:38.417 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:38.417 "is_configured": true, 00:20:38.417 "data_offset": 0, 00:20:38.417 "data_size": 65536 00:20:38.417 }, 00:20:38.417 { 00:20:38.417 "name": "BaseBdev3", 00:20:38.417 
"uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:38.417 "is_configured": true, 00:20:38.417 "data_offset": 0, 00:20:38.417 "data_size": 65536 00:20:38.417 } 00:20:38.417 ] 00:20:38.417 }' 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.417 07:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.675 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.675 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:38.675 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.675 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.932 [2024-10-07 07:41:38.329668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:38.932 [2024-10-07 07:41:38.329987] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:38.932 [2024-10-07 07:41:38.330018] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:38.932 [2024-10-07 07:41:38.330335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:38.932 [2024-10-07 07:41:38.330497] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:38.932 [2024-10-07 07:41:38.330507] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:38.932 [2024-10-07 07:41:38.330824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.932 NewBaseBdev 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.932 
07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.932 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.932 [ 00:20:38.932 { 00:20:38.932 "name": "NewBaseBdev", 00:20:38.932 "aliases": [ 00:20:38.932 "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d" 00:20:38.932 ], 00:20:38.932 "product_name": "Malloc disk", 00:20:38.932 "block_size": 512, 00:20:38.932 "num_blocks": 65536, 00:20:38.932 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:38.932 "assigned_rate_limits": { 00:20:38.932 "rw_ios_per_sec": 0, 00:20:38.932 "rw_mbytes_per_sec": 0, 00:20:38.932 "r_mbytes_per_sec": 0, 00:20:38.932 "w_mbytes_per_sec": 0 00:20:38.932 }, 00:20:38.932 "claimed": true, 00:20:38.932 "claim_type": "exclusive_write", 00:20:38.932 "zoned": false, 00:20:38.932 "supported_io_types": { 00:20:38.932 "read": true, 00:20:38.932 "write": true, 00:20:38.932 "unmap": true, 00:20:38.932 "flush": true, 00:20:38.932 "reset": true, 00:20:38.932 "nvme_admin": false, 00:20:38.932 "nvme_io": false, 00:20:38.932 "nvme_io_md": false, 00:20:38.932 "write_zeroes": true, 00:20:38.932 "zcopy": true, 00:20:38.932 "get_zone_info": false, 00:20:38.932 "zone_management": false, 00:20:38.932 "zone_append": false, 00:20:38.932 "compare": false, 00:20:38.932 "compare_and_write": false, 00:20:38.932 "abort": true, 00:20:38.932 "seek_hole": false, 00:20:38.933 "seek_data": false, 00:20:38.933 "copy": true, 00:20:38.933 "nvme_iov_md": false 00:20:38.933 }, 00:20:38.933 "memory_domains": [ 00:20:38.933 { 00:20:38.933 "dma_device_id": "system", 00:20:38.933 "dma_device_type": 1 
00:20:38.933 }, 00:20:38.933 { 00:20:38.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.933 "dma_device_type": 2 00:20:38.933 } 00:20:38.933 ], 00:20:38.933 "driver_specific": {} 00:20:38.933 } 00:20:38.933 ] 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.933 "name": "Existed_Raid", 00:20:38.933 "uuid": "47fc9ab9-6b21-47ce-b58b-4e97bb2e3501", 00:20:38.933 "strip_size_kb": 64, 00:20:38.933 "state": "online", 00:20:38.933 "raid_level": "concat", 00:20:38.933 "superblock": false, 00:20:38.933 "num_base_bdevs": 3, 00:20:38.933 "num_base_bdevs_discovered": 3, 00:20:38.933 "num_base_bdevs_operational": 3, 00:20:38.933 "base_bdevs_list": [ 00:20:38.933 { 00:20:38.933 "name": "NewBaseBdev", 00:20:38.933 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:38.933 "is_configured": true, 00:20:38.933 "data_offset": 0, 00:20:38.933 "data_size": 65536 00:20:38.933 }, 00:20:38.933 { 00:20:38.933 "name": "BaseBdev2", 00:20:38.933 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:38.933 "is_configured": true, 00:20:38.933 "data_offset": 0, 00:20:38.933 "data_size": 65536 00:20:38.933 }, 00:20:38.933 { 00:20:38.933 "name": "BaseBdev3", 00:20:38.933 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:38.933 "is_configured": true, 00:20:38.933 "data_offset": 0, 00:20:38.933 "data_size": 65536 00:20:38.933 } 00:20:38.933 ] 00:20:38.933 }' 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.933 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:39.499 [2024-10-07 07:41:38.806208] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:39.499 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:39.499 "name": "Existed_Raid", 00:20:39.499 "aliases": [ 00:20:39.499 "47fc9ab9-6b21-47ce-b58b-4e97bb2e3501" 00:20:39.499 ], 00:20:39.499 "product_name": "Raid Volume", 00:20:39.499 "block_size": 512, 00:20:39.499 "num_blocks": 196608, 00:20:39.499 "uuid": "47fc9ab9-6b21-47ce-b58b-4e97bb2e3501", 00:20:39.499 "assigned_rate_limits": { 00:20:39.499 "rw_ios_per_sec": 0, 00:20:39.499 "rw_mbytes_per_sec": 0, 00:20:39.499 "r_mbytes_per_sec": 0, 00:20:39.499 "w_mbytes_per_sec": 0 00:20:39.499 }, 00:20:39.499 "claimed": false, 00:20:39.499 "zoned": false, 00:20:39.499 "supported_io_types": { 00:20:39.499 "read": true, 00:20:39.499 "write": true, 00:20:39.499 "unmap": true, 00:20:39.499 "flush": true, 00:20:39.499 "reset": true, 00:20:39.499 "nvme_admin": false, 00:20:39.499 "nvme_io": false, 00:20:39.499 "nvme_io_md": false, 00:20:39.499 "write_zeroes": true, 00:20:39.499 "zcopy": false, 00:20:39.499 "get_zone_info": false, 00:20:39.499 "zone_management": false, 00:20:39.499 
"zone_append": false, 00:20:39.499 "compare": false, 00:20:39.499 "compare_and_write": false, 00:20:39.499 "abort": false, 00:20:39.499 "seek_hole": false, 00:20:39.499 "seek_data": false, 00:20:39.499 "copy": false, 00:20:39.499 "nvme_iov_md": false 00:20:39.499 }, 00:20:39.499 "memory_domains": [ 00:20:39.499 { 00:20:39.499 "dma_device_id": "system", 00:20:39.499 "dma_device_type": 1 00:20:39.499 }, 00:20:39.499 { 00:20:39.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.499 "dma_device_type": 2 00:20:39.499 }, 00:20:39.499 { 00:20:39.499 "dma_device_id": "system", 00:20:39.499 "dma_device_type": 1 00:20:39.499 }, 00:20:39.499 { 00:20:39.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.499 "dma_device_type": 2 00:20:39.499 }, 00:20:39.499 { 00:20:39.499 "dma_device_id": "system", 00:20:39.499 "dma_device_type": 1 00:20:39.499 }, 00:20:39.499 { 00:20:39.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.499 "dma_device_type": 2 00:20:39.499 } 00:20:39.499 ], 00:20:39.499 "driver_specific": { 00:20:39.499 "raid": { 00:20:39.500 "uuid": "47fc9ab9-6b21-47ce-b58b-4e97bb2e3501", 00:20:39.500 "strip_size_kb": 64, 00:20:39.500 "state": "online", 00:20:39.500 "raid_level": "concat", 00:20:39.500 "superblock": false, 00:20:39.500 "num_base_bdevs": 3, 00:20:39.500 "num_base_bdevs_discovered": 3, 00:20:39.500 "num_base_bdevs_operational": 3, 00:20:39.500 "base_bdevs_list": [ 00:20:39.500 { 00:20:39.500 "name": "NewBaseBdev", 00:20:39.500 "uuid": "c0fc1cdd-c1fa-449a-ac09-35a13fcd2f9d", 00:20:39.500 "is_configured": true, 00:20:39.500 "data_offset": 0, 00:20:39.500 "data_size": 65536 00:20:39.500 }, 00:20:39.500 { 00:20:39.500 "name": "BaseBdev2", 00:20:39.500 "uuid": "32f682e7-de7e-4b40-b245-b19c9c78c258", 00:20:39.500 "is_configured": true, 00:20:39.500 "data_offset": 0, 00:20:39.500 "data_size": 65536 00:20:39.500 }, 00:20:39.500 { 00:20:39.500 "name": "BaseBdev3", 00:20:39.500 "uuid": "9776c3a7-c9d6-4b49-85b3-b7c8e2fd019d", 00:20:39.500 "is_configured": 
true, 00:20:39.500 "data_offset": 0, 00:20:39.500 "data_size": 65536 00:20:39.500 } 00:20:39.500 ] 00:20:39.500 } 00:20:39.500 } 00:20:39.500 }' 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:39.500 BaseBdev2 00:20:39.500 BaseBdev3' 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:39.500 07:41:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.500 07:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.500 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.758 [2024-10-07 07:41:39.073914] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:20:39.758 [2024-10-07 07:41:39.074931] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.758 [2024-10-07 07:41:39.075062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.758 [2024-10-07 07:41:39.075124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.758 [2024-10-07 07:41:39.075141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65681 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 65681 ']' 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # kill -0 65681 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 65681 00:20:39.758 killing process with pid 65681 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 65681' 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 65681 00:20:39.758 [2024-10-07 07:41:39.116010] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:20:39.758 07:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 65681 00:20:40.016 [2024-10-07 07:41:39.442237] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.391 07:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:41.391 00:20:41.391 real 0m11.043s 00:20:41.391 user 0m17.450s 00:20:41.391 sys 0m1.953s 00:20:41.391 07:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.392 ************************************ 00:20:41.392 END TEST raid_state_function_test 00:20:41.392 ************************************ 00:20:41.392 07:41:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:20:41.392 07:41:40 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:20:41.392 07:41:40 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:41.392 07:41:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:41.392 ************************************ 00:20:41.392 START TEST raid_state_function_test_sb 00:20:41.392 ************************************ 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test concat 3 true 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66310 00:20:41.392 Process raid pid: 66310 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66310' 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66310 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 66310 ']' 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:41.392 07:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.650 [2024-10-07 07:41:40.998698] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:20:41.650 [2024-10-07 07:41:40.998891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.650 [2024-10-07 07:41:41.183254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.909 [2024-10-07 07:41:41.429061] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.169 [2024-10-07 07:41:41.670277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.169 [2024-10-07 07:41:41.670546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.427 [2024-10-07 07:41:41.959490] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:42.427 [2024-10-07 07:41:41.959560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:42.427 [2024-10-07 07:41:41.959582] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:42.427 [2024-10-07 07:41:41.959607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:42.427 [2024-10-07 07:41:41.959622] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:42.427 [2024-10-07 07:41:41.959642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.427 07:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.685 07:41:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:42.685 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.685 "name": "Existed_Raid", 00:20:42.685 "uuid": "91cb9e3d-2b67-4e1e-85bd-ea5ce87ff8cc", 00:20:42.685 "strip_size_kb": 64, 00:20:42.685 "state": "configuring", 00:20:42.685 "raid_level": "concat", 00:20:42.685 "superblock": true, 00:20:42.685 "num_base_bdevs": 3, 00:20:42.685 "num_base_bdevs_discovered": 0, 00:20:42.685 "num_base_bdevs_operational": 3, 00:20:42.685 "base_bdevs_list": [ 00:20:42.685 { 00:20:42.685 "name": "BaseBdev1", 00:20:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.685 "is_configured": false, 00:20:42.685 "data_offset": 0, 00:20:42.685 "data_size": 0 00:20:42.685 }, 00:20:42.685 { 00:20:42.685 "name": "BaseBdev2", 00:20:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.685 "is_configured": false, 00:20:42.685 "data_offset": 0, 00:20:42.685 "data_size": 0 00:20:42.685 }, 00:20:42.685 { 00:20:42.685 "name": "BaseBdev3", 00:20:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.685 "is_configured": false, 00:20:42.685 "data_offset": 0, 00:20:42.685 "data_size": 0 00:20:42.685 } 00:20:42.685 ] 00:20:42.685 }' 00:20:42.685 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.685 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.945 [2024-10-07 07:41:42.383477] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:42.945 [2024-10-07 07:41:42.383523] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.945 [2024-10-07 07:41:42.391522] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:42.945 [2024-10-07 07:41:42.391580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:42.945 [2024-10-07 07:41:42.391592] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:42.945 [2024-10-07 07:41:42.391607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:42.945 [2024-10-07 07:41:42.391616] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:42.945 [2024-10-07 07:41:42.391630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.945 [2024-10-07 07:41:42.451524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.945 BaseBdev1 
00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.945 [ 00:20:42.945 { 00:20:42.945 "name": "BaseBdev1", 00:20:42.945 "aliases": [ 00:20:42.945 "6dc8219b-499a-481a-b5e8-ba70f90a4c57" 00:20:42.945 ], 00:20:42.945 "product_name": "Malloc disk", 00:20:42.945 "block_size": 512, 00:20:42.945 "num_blocks": 65536, 00:20:42.945 "uuid": "6dc8219b-499a-481a-b5e8-ba70f90a4c57", 00:20:42.945 "assigned_rate_limits": { 00:20:42.945 
"rw_ios_per_sec": 0, 00:20:42.945 "rw_mbytes_per_sec": 0, 00:20:42.945 "r_mbytes_per_sec": 0, 00:20:42.945 "w_mbytes_per_sec": 0 00:20:42.945 }, 00:20:42.945 "claimed": true, 00:20:42.945 "claim_type": "exclusive_write", 00:20:42.945 "zoned": false, 00:20:42.945 "supported_io_types": { 00:20:42.945 "read": true, 00:20:42.945 "write": true, 00:20:42.945 "unmap": true, 00:20:42.945 "flush": true, 00:20:42.945 "reset": true, 00:20:42.945 "nvme_admin": false, 00:20:42.945 "nvme_io": false, 00:20:42.945 "nvme_io_md": false, 00:20:42.945 "write_zeroes": true, 00:20:42.945 "zcopy": true, 00:20:42.945 "get_zone_info": false, 00:20:42.945 "zone_management": false, 00:20:42.945 "zone_append": false, 00:20:42.945 "compare": false, 00:20:42.945 "compare_and_write": false, 00:20:42.945 "abort": true, 00:20:42.945 "seek_hole": false, 00:20:42.945 "seek_data": false, 00:20:42.945 "copy": true, 00:20:42.945 "nvme_iov_md": false 00:20:42.945 }, 00:20:42.945 "memory_domains": [ 00:20:42.945 { 00:20:42.945 "dma_device_id": "system", 00:20:42.945 "dma_device_type": 1 00:20:42.945 }, 00:20:42.945 { 00:20:42.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.945 "dma_device_type": 2 00:20:42.945 } 00:20:42.945 ], 00:20:42.945 "driver_specific": {} 00:20:42.945 } 00:20:42.945 ] 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:42.945 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.204 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:43.204 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.204 "name": "Existed_Raid", 00:20:43.204 "uuid": "67660228-74ff-4a35-9ba9-a5272aff4b47", 00:20:43.204 "strip_size_kb": 64, 00:20:43.204 "state": "configuring", 00:20:43.204 "raid_level": "concat", 00:20:43.204 "superblock": true, 00:20:43.204 "num_base_bdevs": 3, 00:20:43.204 "num_base_bdevs_discovered": 1, 00:20:43.204 "num_base_bdevs_operational": 3, 00:20:43.204 "base_bdevs_list": [ 00:20:43.204 { 00:20:43.204 "name": "BaseBdev1", 00:20:43.204 "uuid": "6dc8219b-499a-481a-b5e8-ba70f90a4c57", 00:20:43.204 "is_configured": true, 00:20:43.204 "data_offset": 2048, 00:20:43.204 "data_size": 
63488 00:20:43.204 }, 00:20:43.204 { 00:20:43.204 "name": "BaseBdev2", 00:20:43.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.204 "is_configured": false, 00:20:43.204 "data_offset": 0, 00:20:43.204 "data_size": 0 00:20:43.204 }, 00:20:43.204 { 00:20:43.204 "name": "BaseBdev3", 00:20:43.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.204 "is_configured": false, 00:20:43.204 "data_offset": 0, 00:20:43.204 "data_size": 0 00:20:43.204 } 00:20:43.204 ] 00:20:43.204 }' 00:20:43.204 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.204 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.462 [2024-10-07 07:41:42.979715] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:43.462 [2024-10-07 07:41:42.980164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.462 [2024-10-07 07:41:42.991815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.462 [2024-10-07 
07:41:42.994387] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.462 [2024-10-07 07:41:42.994440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.462 [2024-10-07 07:41:42.994454] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:43.462 [2024-10-07 07:41:42.994469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.462 07:41:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.462 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.462 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:43.462 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.462 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.462 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:43.720 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.720 "name": "Existed_Raid", 00:20:43.720 "uuid": "c31b8ecd-d3dc-4b58-8ff6-3fcd57e9dae4", 00:20:43.720 "strip_size_kb": 64, 00:20:43.720 "state": "configuring", 00:20:43.720 "raid_level": "concat", 00:20:43.720 "superblock": true, 00:20:43.720 "num_base_bdevs": 3, 00:20:43.720 "num_base_bdevs_discovered": 1, 00:20:43.720 "num_base_bdevs_operational": 3, 00:20:43.720 "base_bdevs_list": [ 00:20:43.720 { 00:20:43.720 "name": "BaseBdev1", 00:20:43.720 "uuid": "6dc8219b-499a-481a-b5e8-ba70f90a4c57", 00:20:43.720 "is_configured": true, 00:20:43.720 "data_offset": 2048, 00:20:43.720 "data_size": 63488 00:20:43.720 }, 00:20:43.720 { 00:20:43.720 "name": "BaseBdev2", 00:20:43.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.720 "is_configured": false, 00:20:43.720 "data_offset": 0, 00:20:43.720 "data_size": 0 00:20:43.720 }, 00:20:43.720 { 00:20:43.720 "name": "BaseBdev3", 00:20:43.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.720 "is_configured": false, 00:20:43.720 "data_offset": 0, 00:20:43.720 "data_size": 0 00:20:43.720 } 00:20:43.720 ] 00:20:43.720 }' 00:20:43.720 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.720 07:41:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.978 [2024-10-07 07:41:43.498001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.978 BaseBdev2 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@564 -- # xtrace_disable 00:20:43.978 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.978 [ 00:20:43.978 { 00:20:43.978 "name": "BaseBdev2", 00:20:43.978 "aliases": [ 00:20:43.978 "175a4838-13fe-4605-aa9a-24617c9fa67f" 00:20:43.978 ], 00:20:43.978 "product_name": "Malloc disk", 00:20:43.978 "block_size": 512, 00:20:43.978 "num_blocks": 65536, 00:20:43.978 "uuid": "175a4838-13fe-4605-aa9a-24617c9fa67f", 00:20:43.978 "assigned_rate_limits": { 00:20:43.978 "rw_ios_per_sec": 0, 00:20:43.978 "rw_mbytes_per_sec": 0, 00:20:43.978 "r_mbytes_per_sec": 0, 00:20:43.978 "w_mbytes_per_sec": 0 00:20:43.978 }, 00:20:43.978 "claimed": true, 00:20:43.978 "claim_type": "exclusive_write", 00:20:43.978 "zoned": false, 00:20:43.978 "supported_io_types": { 00:20:43.978 "read": true, 00:20:43.978 "write": true, 00:20:43.978 "unmap": true, 00:20:43.978 "flush": true, 00:20:43.978 "reset": true, 00:20:43.978 "nvme_admin": false, 00:20:43.978 "nvme_io": false, 00:20:43.978 "nvme_io_md": false, 00:20:43.978 "write_zeroes": true, 00:20:43.978 "zcopy": true, 00:20:43.978 "get_zone_info": false, 00:20:43.978 "zone_management": false, 00:20:43.978 "zone_append": false, 00:20:43.978 "compare": false, 00:20:43.978 "compare_and_write": false, 00:20:43.978 "abort": true, 00:20:43.978 "seek_hole": false, 00:20:43.978 "seek_data": false, 00:20:44.236 "copy": true, 00:20:44.236 "nvme_iov_md": false 00:20:44.236 }, 00:20:44.236 "memory_domains": [ 00:20:44.236 { 00:20:44.236 "dma_device_id": "system", 00:20:44.236 "dma_device_type": 1 00:20:44.236 }, 00:20:44.236 { 00:20:44.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.236 "dma_device_type": 2 00:20:44.236 } 00:20:44.236 ], 00:20:44.236 "driver_specific": {} 00:20:44.236 } 00:20:44.236 ] 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@910 -- # return 0 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.236 "name": "Existed_Raid", 00:20:44.236 "uuid": "c31b8ecd-d3dc-4b58-8ff6-3fcd57e9dae4", 00:20:44.236 "strip_size_kb": 64, 00:20:44.236 "state": "configuring", 00:20:44.236 "raid_level": "concat", 00:20:44.236 "superblock": true, 00:20:44.236 "num_base_bdevs": 3, 00:20:44.236 "num_base_bdevs_discovered": 2, 00:20:44.236 "num_base_bdevs_operational": 3, 00:20:44.236 "base_bdevs_list": [ 00:20:44.236 { 00:20:44.236 "name": "BaseBdev1", 00:20:44.236 "uuid": "6dc8219b-499a-481a-b5e8-ba70f90a4c57", 00:20:44.236 "is_configured": true, 00:20:44.236 "data_offset": 2048, 00:20:44.236 "data_size": 63488 00:20:44.236 }, 00:20:44.236 { 00:20:44.236 "name": "BaseBdev2", 00:20:44.236 "uuid": "175a4838-13fe-4605-aa9a-24617c9fa67f", 00:20:44.236 "is_configured": true, 00:20:44.236 "data_offset": 2048, 00:20:44.236 "data_size": 63488 00:20:44.236 }, 00:20:44.236 { 00:20:44.236 "name": "BaseBdev3", 00:20:44.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.236 "is_configured": false, 00:20:44.236 "data_offset": 0, 00:20:44.236 "data_size": 0 00:20:44.236 } 00:20:44.236 ] 00:20:44.236 }' 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.236 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 07:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:44.495 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:44.495 07:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 [2024-10-07 07:41:44.041330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:44.495 [2024-10-07 07:41:44.041918] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:44.495 [2024-10-07 07:41:44.041956] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:44.495 BaseBdev3 00:20:44.495 [2024-10-07 07:41:44.042247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:44.495 [2024-10-07 07:41:44.042404] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:44.495 [2024-10-07 07:41:44.042415] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:44.495 [2024-10-07 07:41:44.042597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # 
[[ 0 == 0 ]] 00:20:44.753 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:44.753 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:44.753 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.753 [ 00:20:44.753 { 00:20:44.753 "name": "BaseBdev3", 00:20:44.753 "aliases": [ 00:20:44.753 "a15c69ce-5e6e-4484-96a0-eacc5852cf9e" 00:20:44.753 ], 00:20:44.753 "product_name": "Malloc disk", 00:20:44.753 "block_size": 512, 00:20:44.753 "num_blocks": 65536, 00:20:44.753 "uuid": "a15c69ce-5e6e-4484-96a0-eacc5852cf9e", 00:20:44.753 "assigned_rate_limits": { 00:20:44.753 "rw_ios_per_sec": 0, 00:20:44.753 "rw_mbytes_per_sec": 0, 00:20:44.753 "r_mbytes_per_sec": 0, 00:20:44.753 "w_mbytes_per_sec": 0 00:20:44.753 }, 00:20:44.753 "claimed": true, 00:20:44.753 "claim_type": "exclusive_write", 00:20:44.753 "zoned": false, 00:20:44.753 "supported_io_types": { 00:20:44.753 "read": true, 00:20:44.753 "write": true, 00:20:44.753 "unmap": true, 00:20:44.753 "flush": true, 00:20:44.753 "reset": true, 00:20:44.753 "nvme_admin": false, 00:20:44.753 "nvme_io": false, 00:20:44.753 "nvme_io_md": false, 00:20:44.753 "write_zeroes": true, 00:20:44.753 "zcopy": true, 00:20:44.753 "get_zone_info": false, 00:20:44.753 "zone_management": false, 00:20:44.753 "zone_append": false, 00:20:44.753 "compare": false, 00:20:44.753 "compare_and_write": false, 00:20:44.753 "abort": true, 00:20:44.753 "seek_hole": false, 00:20:44.753 "seek_data": false, 00:20:44.753 "copy": true, 00:20:44.754 "nvme_iov_md": false 00:20:44.754 }, 00:20:44.754 "memory_domains": [ 00:20:44.754 { 00:20:44.754 "dma_device_id": "system", 00:20:44.754 "dma_device_type": 1 00:20:44.754 }, 00:20:44.754 { 00:20:44.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.754 "dma_device_type": 2 00:20:44.754 } 00:20:44.754 ], 00:20:44.754 "driver_specific": 
{} 00:20:44.754 } 00:20:44.754 ] 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- 
# xtrace_disable 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.754 "name": "Existed_Raid", 00:20:44.754 "uuid": "c31b8ecd-d3dc-4b58-8ff6-3fcd57e9dae4", 00:20:44.754 "strip_size_kb": 64, 00:20:44.754 "state": "online", 00:20:44.754 "raid_level": "concat", 00:20:44.754 "superblock": true, 00:20:44.754 "num_base_bdevs": 3, 00:20:44.754 "num_base_bdevs_discovered": 3, 00:20:44.754 "num_base_bdevs_operational": 3, 00:20:44.754 "base_bdevs_list": [ 00:20:44.754 { 00:20:44.754 "name": "BaseBdev1", 00:20:44.754 "uuid": "6dc8219b-499a-481a-b5e8-ba70f90a4c57", 00:20:44.754 "is_configured": true, 00:20:44.754 "data_offset": 2048, 00:20:44.754 "data_size": 63488 00:20:44.754 }, 00:20:44.754 { 00:20:44.754 "name": "BaseBdev2", 00:20:44.754 "uuid": "175a4838-13fe-4605-aa9a-24617c9fa67f", 00:20:44.754 "is_configured": true, 00:20:44.754 "data_offset": 2048, 00:20:44.754 "data_size": 63488 00:20:44.754 }, 00:20:44.754 { 00:20:44.754 "name": "BaseBdev3", 00:20:44.754 "uuid": "a15c69ce-5e6e-4484-96a0-eacc5852cf9e", 00:20:44.754 "is_configured": true, 00:20:44.754 "data_offset": 2048, 00:20:44.754 "data_size": 63488 00:20:44.754 } 00:20:44.754 ] 00:20:44.754 }' 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.754 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:45.015 [2024-10-07 07:41:44.541910] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.015 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:45.275 "name": "Existed_Raid", 00:20:45.275 "aliases": [ 00:20:45.275 "c31b8ecd-d3dc-4b58-8ff6-3fcd57e9dae4" 00:20:45.275 ], 00:20:45.275 "product_name": "Raid Volume", 00:20:45.275 "block_size": 512, 00:20:45.275 "num_blocks": 190464, 00:20:45.275 "uuid": "c31b8ecd-d3dc-4b58-8ff6-3fcd57e9dae4", 00:20:45.275 "assigned_rate_limits": { 00:20:45.275 "rw_ios_per_sec": 0, 00:20:45.275 "rw_mbytes_per_sec": 0, 00:20:45.275 "r_mbytes_per_sec": 0, 00:20:45.275 "w_mbytes_per_sec": 0 00:20:45.275 }, 00:20:45.275 "claimed": false, 00:20:45.275 "zoned": false, 00:20:45.275 "supported_io_types": { 00:20:45.275 "read": true, 00:20:45.275 "write": true, 00:20:45.275 "unmap": true, 00:20:45.275 "flush": true, 00:20:45.275 "reset": true, 00:20:45.275 "nvme_admin": false, 00:20:45.275 "nvme_io": false, 00:20:45.275 "nvme_io_md": false, 00:20:45.275 
"write_zeroes": true, 00:20:45.275 "zcopy": false, 00:20:45.275 "get_zone_info": false, 00:20:45.275 "zone_management": false, 00:20:45.275 "zone_append": false, 00:20:45.275 "compare": false, 00:20:45.275 "compare_and_write": false, 00:20:45.275 "abort": false, 00:20:45.275 "seek_hole": false, 00:20:45.275 "seek_data": false, 00:20:45.275 "copy": false, 00:20:45.275 "nvme_iov_md": false 00:20:45.275 }, 00:20:45.275 "memory_domains": [ 00:20:45.275 { 00:20:45.275 "dma_device_id": "system", 00:20:45.275 "dma_device_type": 1 00:20:45.275 }, 00:20:45.275 { 00:20:45.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.275 "dma_device_type": 2 00:20:45.275 }, 00:20:45.275 { 00:20:45.275 "dma_device_id": "system", 00:20:45.275 "dma_device_type": 1 00:20:45.275 }, 00:20:45.275 { 00:20:45.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.275 "dma_device_type": 2 00:20:45.275 }, 00:20:45.275 { 00:20:45.275 "dma_device_id": "system", 00:20:45.275 "dma_device_type": 1 00:20:45.275 }, 00:20:45.275 { 00:20:45.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.275 "dma_device_type": 2 00:20:45.275 } 00:20:45.275 ], 00:20:45.275 "driver_specific": { 00:20:45.275 "raid": { 00:20:45.275 "uuid": "c31b8ecd-d3dc-4b58-8ff6-3fcd57e9dae4", 00:20:45.275 "strip_size_kb": 64, 00:20:45.275 "state": "online", 00:20:45.275 "raid_level": "concat", 00:20:45.275 "superblock": true, 00:20:45.275 "num_base_bdevs": 3, 00:20:45.275 "num_base_bdevs_discovered": 3, 00:20:45.275 "num_base_bdevs_operational": 3, 00:20:45.275 "base_bdevs_list": [ 00:20:45.275 { 00:20:45.275 "name": "BaseBdev1", 00:20:45.275 "uuid": "6dc8219b-499a-481a-b5e8-ba70f90a4c57", 00:20:45.275 "is_configured": true, 00:20:45.275 "data_offset": 2048, 00:20:45.275 "data_size": 63488 00:20:45.275 }, 00:20:45.275 { 00:20:45.275 "name": "BaseBdev2", 00:20:45.275 "uuid": "175a4838-13fe-4605-aa9a-24617c9fa67f", 00:20:45.275 "is_configured": true, 00:20:45.275 "data_offset": 2048, 00:20:45.275 "data_size": 63488 00:20:45.275 }, 
00:20:45.275 { 00:20:45.275 "name": "BaseBdev3", 00:20:45.275 "uuid": "a15c69ce-5e6e-4484-96a0-eacc5852cf9e", 00:20:45.275 "is_configured": true, 00:20:45.275 "data_offset": 2048, 00:20:45.275 "data_size": 63488 00:20:45.275 } 00:20:45.275 ] 00:20:45.275 } 00:20:45.275 } 00:20:45.275 }' 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:45.275 BaseBdev2 00:20:45.275 BaseBdev3' 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.275 
07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:45.275 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:20:45.276 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.276 [2024-10-07 07:41:44.825630] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:45.276 [2024-10-07 07:41:44.825828] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.276 [2024-10-07 07:41:44.826012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.534 07:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:45.534 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.534 "name": "Existed_Raid", 00:20:45.534 "uuid": "c31b8ecd-d3dc-4b58-8ff6-3fcd57e9dae4", 00:20:45.534 "strip_size_kb": 64, 00:20:45.534 "state": "offline", 00:20:45.534 "raid_level": "concat", 00:20:45.534 "superblock": true, 00:20:45.534 "num_base_bdevs": 3, 00:20:45.534 "num_base_bdevs_discovered": 2, 00:20:45.534 "num_base_bdevs_operational": 2, 00:20:45.534 "base_bdevs_list": [ 00:20:45.534 { 00:20:45.534 "name": null, 00:20:45.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.534 "is_configured": false, 00:20:45.534 "data_offset": 0, 00:20:45.534 "data_size": 63488 00:20:45.534 }, 00:20:45.534 { 00:20:45.534 "name": "BaseBdev2", 00:20:45.534 "uuid": "175a4838-13fe-4605-aa9a-24617c9fa67f", 00:20:45.534 "is_configured": true, 00:20:45.534 "data_offset": 2048, 00:20:45.534 "data_size": 63488 00:20:45.534 }, 00:20:45.534 { 00:20:45.534 "name": "BaseBdev3", 00:20:45.534 "uuid": "a15c69ce-5e6e-4484-96a0-eacc5852cf9e", 
00:20:45.534 "is_configured": true, 00:20:45.534 "data_offset": 2048, 00:20:45.534 "data_size": 63488 00:20:45.534 } 00:20:45.534 ] 00:20:45.534 }' 00:20:45.534 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.534 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.101 [2024-10-07 07:41:45.459598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.101 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.101 [2024-10-07 07:41:45.647774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:46.101 [2024-10-07 07:41:45.647830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.359 BaseBdev2 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:46.359 07:41:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.359 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.359 [ 00:20:46.359 { 00:20:46.359 "name": "BaseBdev2", 00:20:46.359 "aliases": [ 00:20:46.359 "9b0b1146-1d27-4902-a5ff-cab860183db3" 00:20:46.359 ], 00:20:46.359 "product_name": "Malloc disk", 00:20:46.359 "block_size": 512, 00:20:46.359 "num_blocks": 65536, 00:20:46.359 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:46.359 "assigned_rate_limits": { 00:20:46.359 "rw_ios_per_sec": 0, 00:20:46.359 "rw_mbytes_per_sec": 0, 00:20:46.359 "r_mbytes_per_sec": 0, 00:20:46.359 "w_mbytes_per_sec": 0 00:20:46.359 }, 00:20:46.359 "claimed": false, 00:20:46.359 "zoned": false, 00:20:46.359 "supported_io_types": { 00:20:46.359 "read": true, 00:20:46.359 "write": true, 00:20:46.359 "unmap": true, 00:20:46.359 "flush": true, 00:20:46.359 "reset": true, 00:20:46.359 "nvme_admin": false, 00:20:46.359 "nvme_io": false, 00:20:46.359 "nvme_io_md": false, 00:20:46.359 "write_zeroes": true, 00:20:46.359 "zcopy": true, 00:20:46.359 "get_zone_info": false, 00:20:46.359 
"zone_management": false, 00:20:46.359 "zone_append": false, 00:20:46.359 "compare": false, 00:20:46.359 "compare_and_write": false, 00:20:46.359 "abort": true, 00:20:46.359 "seek_hole": false, 00:20:46.359 "seek_data": false, 00:20:46.359 "copy": true, 00:20:46.359 "nvme_iov_md": false 00:20:46.359 }, 00:20:46.359 "memory_domains": [ 00:20:46.359 { 00:20:46.359 "dma_device_id": "system", 00:20:46.359 "dma_device_type": 1 00:20:46.359 }, 00:20:46.359 { 00:20:46.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.360 "dma_device_type": 2 00:20:46.360 } 00:20:46.360 ], 00:20:46.360 "driver_specific": {} 00:20:46.360 } 00:20:46.360 ] 00:20:46.360 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.360 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:46.360 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:46.360 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.360 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:46.360 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.360 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.618 BaseBdev3 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local i 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.618 [ 00:20:46.618 { 00:20:46.618 "name": "BaseBdev3", 00:20:46.618 "aliases": [ 00:20:46.618 "beafada0-9017-4265-a6a6-6219c24e7dff" 00:20:46.618 ], 00:20:46.618 "product_name": "Malloc disk", 00:20:46.618 "block_size": 512, 00:20:46.618 "num_blocks": 65536, 00:20:46.618 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:46.618 "assigned_rate_limits": { 00:20:46.618 "rw_ios_per_sec": 0, 00:20:46.618 "rw_mbytes_per_sec": 0, 00:20:46.618 "r_mbytes_per_sec": 0, 00:20:46.618 "w_mbytes_per_sec": 0 00:20:46.618 }, 00:20:46.618 "claimed": false, 00:20:46.618 "zoned": false, 00:20:46.618 "supported_io_types": { 00:20:46.618 "read": true, 00:20:46.618 "write": true, 00:20:46.618 "unmap": true, 00:20:46.618 "flush": true, 00:20:46.618 "reset": true, 00:20:46.618 "nvme_admin": false, 00:20:46.618 "nvme_io": false, 00:20:46.618 "nvme_io_md": false, 00:20:46.618 "write_zeroes": true, 00:20:46.618 
"zcopy": true, 00:20:46.618 "get_zone_info": false, 00:20:46.618 "zone_management": false, 00:20:46.618 "zone_append": false, 00:20:46.618 "compare": false, 00:20:46.618 "compare_and_write": false, 00:20:46.618 "abort": true, 00:20:46.618 "seek_hole": false, 00:20:46.618 "seek_data": false, 00:20:46.618 "copy": true, 00:20:46.618 "nvme_iov_md": false 00:20:46.618 }, 00:20:46.618 "memory_domains": [ 00:20:46.618 { 00:20:46.618 "dma_device_id": "system", 00:20:46.618 "dma_device_type": 1 00:20:46.618 }, 00:20:46.618 { 00:20:46.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.618 "dma_device_type": 2 00:20:46.618 } 00:20:46.618 ], 00:20:46.618 "driver_specific": {} 00:20:46.618 } 00:20:46.618 ] 00:20:46.618 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.619 [2024-10-07 07:41:45.996102] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:46.619 [2024-10-07 07:41:45.996161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:46.619 [2024-10-07 07:41:45.996191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.619 [2024-10-07 07:41:45.998639] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.619 07:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:46.619 07:41:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.619 "name": "Existed_Raid", 00:20:46.619 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:46.619 "strip_size_kb": 64, 00:20:46.619 "state": "configuring", 00:20:46.619 "raid_level": "concat", 00:20:46.619 "superblock": true, 00:20:46.619 "num_base_bdevs": 3, 00:20:46.619 "num_base_bdevs_discovered": 2, 00:20:46.619 "num_base_bdevs_operational": 3, 00:20:46.619 "base_bdevs_list": [ 00:20:46.619 { 00:20:46.619 "name": "BaseBdev1", 00:20:46.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.619 "is_configured": false, 00:20:46.619 "data_offset": 0, 00:20:46.619 "data_size": 0 00:20:46.619 }, 00:20:46.619 { 00:20:46.619 "name": "BaseBdev2", 00:20:46.619 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:46.619 "is_configured": true, 00:20:46.619 "data_offset": 2048, 00:20:46.619 "data_size": 63488 00:20:46.619 }, 00:20:46.619 { 00:20:46.619 "name": "BaseBdev3", 00:20:46.619 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:46.619 "is_configured": true, 00:20:46.619 "data_offset": 2048, 00:20:46.619 "data_size": 63488 00:20:46.619 } 00:20:46.619 ] 00:20:46.619 }' 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.619 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.187 [2024-10-07 07:41:46.456140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:47.187 07:41:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.187 "name": "Existed_Raid", 00:20:47.187 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:47.187 "strip_size_kb": 64, 
00:20:47.187 "state": "configuring", 00:20:47.187 "raid_level": "concat", 00:20:47.187 "superblock": true, 00:20:47.187 "num_base_bdevs": 3, 00:20:47.187 "num_base_bdevs_discovered": 1, 00:20:47.187 "num_base_bdevs_operational": 3, 00:20:47.187 "base_bdevs_list": [ 00:20:47.187 { 00:20:47.187 "name": "BaseBdev1", 00:20:47.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.187 "is_configured": false, 00:20:47.187 "data_offset": 0, 00:20:47.187 "data_size": 0 00:20:47.187 }, 00:20:47.187 { 00:20:47.187 "name": null, 00:20:47.187 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:47.187 "is_configured": false, 00:20:47.187 "data_offset": 0, 00:20:47.187 "data_size": 63488 00:20:47.187 }, 00:20:47.187 { 00:20:47.187 "name": "BaseBdev3", 00:20:47.187 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:47.187 "is_configured": true, 00:20:47.187 "data_offset": 2048, 00:20:47.187 "data_size": 63488 00:20:47.187 } 00:20:47.187 ] 00:20:47.187 }' 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.187 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.446 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.446 07:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:47.446 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:47.446 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.446 07:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.705 [2024-10-07 07:41:47.063769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.705 BaseBdev1 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:47.705 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.705 
[ 00:20:47.705 { 00:20:47.705 "name": "BaseBdev1", 00:20:47.705 "aliases": [ 00:20:47.705 "caf6f6a9-e992-4953-b1ef-8aba68a9a45f" 00:20:47.705 ], 00:20:47.705 "product_name": "Malloc disk", 00:20:47.705 "block_size": 512, 00:20:47.705 "num_blocks": 65536, 00:20:47.705 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:47.705 "assigned_rate_limits": { 00:20:47.706 "rw_ios_per_sec": 0, 00:20:47.706 "rw_mbytes_per_sec": 0, 00:20:47.706 "r_mbytes_per_sec": 0, 00:20:47.706 "w_mbytes_per_sec": 0 00:20:47.706 }, 00:20:47.706 "claimed": true, 00:20:47.706 "claim_type": "exclusive_write", 00:20:47.706 "zoned": false, 00:20:47.706 "supported_io_types": { 00:20:47.706 "read": true, 00:20:47.706 "write": true, 00:20:47.706 "unmap": true, 00:20:47.706 "flush": true, 00:20:47.706 "reset": true, 00:20:47.706 "nvme_admin": false, 00:20:47.706 "nvme_io": false, 00:20:47.706 "nvme_io_md": false, 00:20:47.706 "write_zeroes": true, 00:20:47.706 "zcopy": true, 00:20:47.706 "get_zone_info": false, 00:20:47.706 "zone_management": false, 00:20:47.706 "zone_append": false, 00:20:47.706 "compare": false, 00:20:47.706 "compare_and_write": false, 00:20:47.706 "abort": true, 00:20:47.706 "seek_hole": false, 00:20:47.706 "seek_data": false, 00:20:47.706 "copy": true, 00:20:47.706 "nvme_iov_md": false 00:20:47.706 }, 00:20:47.706 "memory_domains": [ 00:20:47.706 { 00:20:47.706 "dma_device_id": "system", 00:20:47.706 "dma_device_type": 1 00:20:47.706 }, 00:20:47.706 { 00:20:47.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.706 "dma_device_type": 2 00:20:47.706 } 00:20:47.706 ], 00:20:47.706 "driver_specific": {} 00:20:47.706 } 00:20:47.706 ] 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.706 "name": "Existed_Raid", 00:20:47.706 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:47.706 "strip_size_kb": 64, 00:20:47.706 "state": "configuring", 00:20:47.706 "raid_level": "concat", 00:20:47.706 "superblock": true, 
00:20:47.706 "num_base_bdevs": 3, 00:20:47.706 "num_base_bdevs_discovered": 2, 00:20:47.706 "num_base_bdevs_operational": 3, 00:20:47.706 "base_bdevs_list": [ 00:20:47.706 { 00:20:47.706 "name": "BaseBdev1", 00:20:47.706 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:47.706 "is_configured": true, 00:20:47.706 "data_offset": 2048, 00:20:47.706 "data_size": 63488 00:20:47.706 }, 00:20:47.706 { 00:20:47.706 "name": null, 00:20:47.706 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:47.706 "is_configured": false, 00:20:47.706 "data_offset": 0, 00:20:47.706 "data_size": 63488 00:20:47.706 }, 00:20:47.706 { 00:20:47.706 "name": "BaseBdev3", 00:20:47.706 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:47.706 "is_configured": true, 00:20:47.706 "data_offset": 2048, 00:20:47.706 "data_size": 63488 00:20:47.706 } 00:20:47.706 ] 00:20:47.706 }' 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.706 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- 
# xtrace_disable 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.272 [2024-10-07 07:41:47.572032] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:48.272 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.272 "name": "Existed_Raid", 00:20:48.272 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:48.272 "strip_size_kb": 64, 00:20:48.272 "state": "configuring", 00:20:48.272 "raid_level": "concat", 00:20:48.272 "superblock": true, 00:20:48.272 "num_base_bdevs": 3, 00:20:48.272 "num_base_bdevs_discovered": 1, 00:20:48.272 "num_base_bdevs_operational": 3, 00:20:48.272 "base_bdevs_list": [ 00:20:48.272 { 00:20:48.272 "name": "BaseBdev1", 00:20:48.272 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:48.272 "is_configured": true, 00:20:48.273 "data_offset": 2048, 00:20:48.273 "data_size": 63488 00:20:48.273 }, 00:20:48.273 { 00:20:48.273 "name": null, 00:20:48.273 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:48.273 "is_configured": false, 00:20:48.273 "data_offset": 0, 00:20:48.273 "data_size": 63488 00:20:48.273 }, 00:20:48.273 { 00:20:48.273 "name": null, 00:20:48.273 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:48.273 "is_configured": false, 00:20:48.273 "data_offset": 0, 00:20:48.273 "data_size": 63488 00:20:48.273 } 00:20:48.273 ] 00:20:48.273 }' 00:20:48.273 07:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.273 07:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.531 [2024-10-07 07:41:48.056149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.531 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:48.789 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.789 "name": "Existed_Raid", 00:20:48.789 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:48.789 "strip_size_kb": 64, 00:20:48.789 "state": "configuring", 00:20:48.789 "raid_level": "concat", 00:20:48.789 "superblock": true, 00:20:48.789 "num_base_bdevs": 3, 00:20:48.789 "num_base_bdevs_discovered": 2, 00:20:48.789 "num_base_bdevs_operational": 3, 00:20:48.789 "base_bdevs_list": [ 00:20:48.789 { 00:20:48.789 "name": "BaseBdev1", 00:20:48.789 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:48.789 "is_configured": true, 00:20:48.789 "data_offset": 2048, 00:20:48.789 "data_size": 63488 00:20:48.789 }, 00:20:48.789 { 00:20:48.789 "name": null, 00:20:48.789 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:48.789 "is_configured": false, 00:20:48.789 "data_offset": 0, 00:20:48.789 "data_size": 63488 00:20:48.789 }, 00:20:48.789 { 00:20:48.789 "name": "BaseBdev3", 00:20:48.789 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:48.789 "is_configured": true, 00:20:48.789 "data_offset": 2048, 00:20:48.789 "data_size": 63488 00:20:48.789 } 00:20:48.789 ] 00:20:48.789 }' 00:20:48.789 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.789 07:41:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:49.046 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.046 [2024-10-07 07:41:48.584406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.304 "name": "Existed_Raid", 00:20:49.304 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:49.304 "strip_size_kb": 64, 00:20:49.304 "state": "configuring", 00:20:49.304 "raid_level": "concat", 00:20:49.304 "superblock": true, 00:20:49.304 "num_base_bdevs": 3, 00:20:49.304 "num_base_bdevs_discovered": 1, 00:20:49.304 "num_base_bdevs_operational": 3, 00:20:49.304 "base_bdevs_list": [ 00:20:49.304 { 00:20:49.304 "name": null, 00:20:49.304 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:49.304 "is_configured": false, 00:20:49.304 "data_offset": 0, 00:20:49.304 "data_size": 63488 00:20:49.304 }, 00:20:49.304 { 00:20:49.304 "name": null, 00:20:49.304 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:49.304 "is_configured": false, 00:20:49.304 "data_offset": 0, 
00:20:49.304 "data_size": 63488 00:20:49.304 }, 00:20:49.304 { 00:20:49.304 "name": "BaseBdev3", 00:20:49.304 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:49.304 "is_configured": true, 00:20:49.304 "data_offset": 2048, 00:20:49.304 "data_size": 63488 00:20:49.304 } 00:20:49.304 ] 00:20:49.304 }' 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.304 07:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.911 [2024-10-07 07:41:49.212184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:49.911 07:41:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.911 "name": "Existed_Raid", 00:20:49.911 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:49.911 "strip_size_kb": 64, 00:20:49.911 "state": "configuring", 00:20:49.911 "raid_level": "concat", 00:20:49.911 "superblock": true, 00:20:49.911 "num_base_bdevs": 3, 00:20:49.911 
"num_base_bdevs_discovered": 2, 00:20:49.911 "num_base_bdevs_operational": 3, 00:20:49.911 "base_bdevs_list": [ 00:20:49.911 { 00:20:49.911 "name": null, 00:20:49.911 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:49.911 "is_configured": false, 00:20:49.911 "data_offset": 0, 00:20:49.911 "data_size": 63488 00:20:49.911 }, 00:20:49.911 { 00:20:49.911 "name": "BaseBdev2", 00:20:49.911 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:49.911 "is_configured": true, 00:20:49.911 "data_offset": 2048, 00:20:49.911 "data_size": 63488 00:20:49.911 }, 00:20:49.911 { 00:20:49.911 "name": "BaseBdev3", 00:20:49.911 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:49.911 "is_configured": true, 00:20:49.911 "data_offset": 2048, 00:20:49.911 "data_size": 63488 00:20:49.911 } 00:20:49.911 ] 00:20:49.911 }' 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.911 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.169 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.169 07:41:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u caf6f6a9-e992-4953-b1ef-8aba68a9a45f 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.428 [2024-10-07 07:41:49.824998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:50.428 [2024-10-07 07:41:49.825270] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:50.428 [2024-10-07 07:41:49.825295] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:50.428 [2024-10-07 07:41:49.825637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:50.428 NewBaseBdev 00:20:50.428 [2024-10-07 07:41:49.825810] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:50.428 [2024-10-07 07:41:49.825823] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:50.428 [2024-10-07 07:41:49.826004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 
00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.428 [ 00:20:50.428 { 00:20:50.428 "name": "NewBaseBdev", 00:20:50.428 "aliases": [ 00:20:50.428 "caf6f6a9-e992-4953-b1ef-8aba68a9a45f" 00:20:50.428 ], 00:20:50.428 "product_name": "Malloc disk", 00:20:50.428 "block_size": 512, 00:20:50.428 "num_blocks": 65536, 00:20:50.428 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:50.428 "assigned_rate_limits": { 00:20:50.428 "rw_ios_per_sec": 0, 00:20:50.428 "rw_mbytes_per_sec": 0, 00:20:50.428 "r_mbytes_per_sec": 0, 00:20:50.428 "w_mbytes_per_sec": 0 00:20:50.428 }, 00:20:50.428 "claimed": true, 00:20:50.428 "claim_type": "exclusive_write", 00:20:50.428 "zoned": false, 00:20:50.428 "supported_io_types": { 00:20:50.428 "read": true, 00:20:50.428 "write": true, 
00:20:50.428 "unmap": true, 00:20:50.428 "flush": true, 00:20:50.428 "reset": true, 00:20:50.428 "nvme_admin": false, 00:20:50.428 "nvme_io": false, 00:20:50.428 "nvme_io_md": false, 00:20:50.428 "write_zeroes": true, 00:20:50.428 "zcopy": true, 00:20:50.428 "get_zone_info": false, 00:20:50.428 "zone_management": false, 00:20:50.428 "zone_append": false, 00:20:50.428 "compare": false, 00:20:50.428 "compare_and_write": false, 00:20:50.428 "abort": true, 00:20:50.428 "seek_hole": false, 00:20:50.428 "seek_data": false, 00:20:50.428 "copy": true, 00:20:50.428 "nvme_iov_md": false 00:20:50.428 }, 00:20:50.428 "memory_domains": [ 00:20:50.428 { 00:20:50.428 "dma_device_id": "system", 00:20:50.428 "dma_device_type": 1 00:20:50.428 }, 00:20:50.428 { 00:20:50.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.428 "dma_device_type": 2 00:20:50.428 } 00:20:50.428 ], 00:20:50.428 "driver_specific": {} 00:20:50.428 } 00:20:50.428 ] 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.428 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.429 "name": "Existed_Raid", 00:20:50.429 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:50.429 "strip_size_kb": 64, 00:20:50.429 "state": "online", 00:20:50.429 "raid_level": "concat", 00:20:50.429 "superblock": true, 00:20:50.429 "num_base_bdevs": 3, 00:20:50.429 "num_base_bdevs_discovered": 3, 00:20:50.429 "num_base_bdevs_operational": 3, 00:20:50.429 "base_bdevs_list": [ 00:20:50.429 { 00:20:50.429 "name": "NewBaseBdev", 00:20:50.429 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:50.429 "is_configured": true, 00:20:50.429 "data_offset": 2048, 00:20:50.429 "data_size": 63488 00:20:50.429 }, 00:20:50.429 { 00:20:50.429 "name": "BaseBdev2", 00:20:50.429 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:50.429 "is_configured": true, 00:20:50.429 "data_offset": 2048, 00:20:50.429 "data_size": 63488 00:20:50.429 }, 00:20:50.429 { 00:20:50.429 "name": "BaseBdev3", 00:20:50.429 "uuid": 
"beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:50.429 "is_configured": true, 00:20:50.429 "data_offset": 2048, 00:20:50.429 "data_size": 63488 00:20:50.429 } 00:20:50.429 ] 00:20:50.429 }' 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.429 07:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.996 [2024-10-07 07:41:50.325539] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:50.996 "name": "Existed_Raid", 00:20:50.996 "aliases": [ 00:20:50.996 "7a37f431-32cc-435d-895c-108c379200d0" 
00:20:50.996 ], 00:20:50.996 "product_name": "Raid Volume", 00:20:50.996 "block_size": 512, 00:20:50.996 "num_blocks": 190464, 00:20:50.996 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:50.996 "assigned_rate_limits": { 00:20:50.996 "rw_ios_per_sec": 0, 00:20:50.996 "rw_mbytes_per_sec": 0, 00:20:50.996 "r_mbytes_per_sec": 0, 00:20:50.996 "w_mbytes_per_sec": 0 00:20:50.996 }, 00:20:50.996 "claimed": false, 00:20:50.996 "zoned": false, 00:20:50.996 "supported_io_types": { 00:20:50.996 "read": true, 00:20:50.996 "write": true, 00:20:50.996 "unmap": true, 00:20:50.996 "flush": true, 00:20:50.996 "reset": true, 00:20:50.996 "nvme_admin": false, 00:20:50.996 "nvme_io": false, 00:20:50.996 "nvme_io_md": false, 00:20:50.996 "write_zeroes": true, 00:20:50.996 "zcopy": false, 00:20:50.996 "get_zone_info": false, 00:20:50.996 "zone_management": false, 00:20:50.996 "zone_append": false, 00:20:50.996 "compare": false, 00:20:50.996 "compare_and_write": false, 00:20:50.996 "abort": false, 00:20:50.996 "seek_hole": false, 00:20:50.996 "seek_data": false, 00:20:50.996 "copy": false, 00:20:50.996 "nvme_iov_md": false 00:20:50.996 }, 00:20:50.996 "memory_domains": [ 00:20:50.996 { 00:20:50.996 "dma_device_id": "system", 00:20:50.996 "dma_device_type": 1 00:20:50.996 }, 00:20:50.996 { 00:20:50.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.996 "dma_device_type": 2 00:20:50.996 }, 00:20:50.996 { 00:20:50.996 "dma_device_id": "system", 00:20:50.996 "dma_device_type": 1 00:20:50.996 }, 00:20:50.996 { 00:20:50.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.996 "dma_device_type": 2 00:20:50.996 }, 00:20:50.996 { 00:20:50.996 "dma_device_id": "system", 00:20:50.996 "dma_device_type": 1 00:20:50.996 }, 00:20:50.996 { 00:20:50.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.996 "dma_device_type": 2 00:20:50.996 } 00:20:50.996 ], 00:20:50.996 "driver_specific": { 00:20:50.996 "raid": { 00:20:50.996 "uuid": "7a37f431-32cc-435d-895c-108c379200d0", 00:20:50.996 
"strip_size_kb": 64, 00:20:50.996 "state": "online", 00:20:50.996 "raid_level": "concat", 00:20:50.996 "superblock": true, 00:20:50.996 "num_base_bdevs": 3, 00:20:50.996 "num_base_bdevs_discovered": 3, 00:20:50.996 "num_base_bdevs_operational": 3, 00:20:50.996 "base_bdevs_list": [ 00:20:50.996 { 00:20:50.996 "name": "NewBaseBdev", 00:20:50.996 "uuid": "caf6f6a9-e992-4953-b1ef-8aba68a9a45f", 00:20:50.996 "is_configured": true, 00:20:50.996 "data_offset": 2048, 00:20:50.996 "data_size": 63488 00:20:50.996 }, 00:20:50.996 { 00:20:50.996 "name": "BaseBdev2", 00:20:50.996 "uuid": "9b0b1146-1d27-4902-a5ff-cab860183db3", 00:20:50.996 "is_configured": true, 00:20:50.996 "data_offset": 2048, 00:20:50.996 "data_size": 63488 00:20:50.996 }, 00:20:50.996 { 00:20:50.996 "name": "BaseBdev3", 00:20:50.996 "uuid": "beafada0-9017-4265-a6a6-6219c24e7dff", 00:20:50.996 "is_configured": true, 00:20:50.996 "data_offset": 2048, 00:20:50.996 "data_size": 63488 00:20:50.996 } 00:20:50.996 ] 00:20:50.996 } 00:20:50.996 } 00:20:50.996 }' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:50.996 BaseBdev2 00:20:50.996 BaseBdev3' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:50.996 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:51.256 07:41:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.256 [2024-10-07 07:41:50.617246] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.256 [2024-10-07 07:41:50.617283] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.256 [2024-10-07 07:41:50.617379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.256 [2024-10-07 07:41:50.617442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.256 [2024-10-07 07:41:50.617459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66310 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 66310 ']' 00:20:51.256 07:41:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 66310 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 66310 00:20:51.256 killing process with pid 66310 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 66310' 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 66310 00:20:51.256 [2024-10-07 07:41:50.662970] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.256 07:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 66310 00:20:51.514 [2024-10-07 07:41:51.031428] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.415 07:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:53.415 00:20:53.415 real 0m11.609s 00:20:53.415 user 0m18.281s 00:20:53.415 sys 0m2.047s 00:20:53.415 07:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:53.415 ************************************ 00:20:53.415 END TEST raid_state_function_test_sb 00:20:53.415 ************************************ 00:20:53.415 07:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.415 07:41:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:20:53.415 
07:41:52 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:20:53.415 07:41:52 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:53.415 07:41:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:53.415 ************************************ 00:20:53.415 START TEST raid_superblock_test 00:20:53.415 ************************************ 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test concat 3 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:53.415 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:20:53.416 
07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66941 00:20:53.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66941 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 66941 ']' 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:53.416 07:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.416 [2024-10-07 07:41:52.664601] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:20:53.416 [2024-10-07 07:41:52.664894] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66941 ] 00:20:53.416 [2024-10-07 07:41:52.853078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.674 [2024-10-07 07:41:53.123491] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.932 [2024-10-07 07:41:53.360412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.932 [2024-10-07 07:41:53.360730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.190 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:54.190 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:20:54.190 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:54.190 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.190 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:54.191 
07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.191 malloc1 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.191 [2024-10-07 07:41:53.695687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:54.191 [2024-10-07 07:41:53.695800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.191 [2024-10-07 07:41:53.695836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:54.191 [2024-10-07 07:41:53.695855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.191 [2024-10-07 07:41:53.698713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.191 [2024-10-07 07:41:53.698798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:54.191 pt1 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.191 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.449 malloc2 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.449 [2024-10-07 07:41:53.771230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:54.449 [2024-10-07 07:41:53.771302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.449 [2024-10-07 07:41:53.771334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:54.449 [2024-10-07 07:41:53.771348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.449 [2024-10-07 07:41:53.774329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.449 [2024-10-07 07:41:53.774548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:54.449 
pt2 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.449 malloc3 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.449 [2024-10-07 07:41:53.831543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:54.449 [2024-10-07 07:41:53.831629] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.449 [2024-10-07 07:41:53.831669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:54.449 [2024-10-07 07:41:53.831686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.449 [2024-10-07 07:41:53.834689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.449 [2024-10-07 07:41:53.834754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:54.449 pt3 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.449 [2024-10-07 07:41:53.843783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:54.449 [2024-10-07 07:41:53.846379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:54.449 [2024-10-07 07:41:53.846467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:54.449 [2024-10-07 07:41:53.846680] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:54.449 [2024-10-07 07:41:53.846700] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:54.449 [2024-10-07 07:41:53.847068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:54.449 [2024-10-07 07:41:53.847278] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:54.449 [2024-10-07 07:41:53.847298] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:54.449 [2024-10-07 07:41:53.847565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.449 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.450 07:41:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.450 "name": "raid_bdev1", 00:20:54.450 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:54.450 "strip_size_kb": 64, 00:20:54.450 "state": "online", 00:20:54.450 "raid_level": "concat", 00:20:54.450 "superblock": true, 00:20:54.450 "num_base_bdevs": 3, 00:20:54.450 "num_base_bdevs_discovered": 3, 00:20:54.450 "num_base_bdevs_operational": 3, 00:20:54.450 "base_bdevs_list": [ 00:20:54.450 { 00:20:54.450 "name": "pt1", 00:20:54.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.450 "is_configured": true, 00:20:54.450 "data_offset": 2048, 00:20:54.450 "data_size": 63488 00:20:54.450 }, 00:20:54.450 { 00:20:54.450 "name": "pt2", 00:20:54.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.450 "is_configured": true, 00:20:54.450 "data_offset": 2048, 00:20:54.450 "data_size": 63488 00:20:54.450 }, 00:20:54.450 { 00:20:54.450 "name": "pt3", 00:20:54.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.450 "is_configured": true, 00:20:54.450 "data_offset": 2048, 00:20:54.450 "data_size": 63488 00:20:54.450 } 00:20:54.450 ] 00:20:54.450 }' 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.450 07:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.708 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.966 [2024-10-07 07:41:54.272171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.966 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.966 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:54.966 "name": "raid_bdev1", 00:20:54.966 "aliases": [ 00:20:54.966 "7b26ba68-98e6-4a4a-9576-1a7422afbf90" 00:20:54.966 ], 00:20:54.966 "product_name": "Raid Volume", 00:20:54.966 "block_size": 512, 00:20:54.966 "num_blocks": 190464, 00:20:54.966 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:54.966 "assigned_rate_limits": { 00:20:54.966 "rw_ios_per_sec": 0, 00:20:54.966 "rw_mbytes_per_sec": 0, 00:20:54.966 "r_mbytes_per_sec": 0, 00:20:54.966 "w_mbytes_per_sec": 0 00:20:54.966 }, 00:20:54.966 "claimed": false, 00:20:54.966 "zoned": false, 00:20:54.966 "supported_io_types": { 00:20:54.966 "read": true, 00:20:54.966 "write": true, 00:20:54.966 "unmap": true, 00:20:54.966 "flush": true, 00:20:54.966 "reset": true, 00:20:54.966 "nvme_admin": false, 00:20:54.966 "nvme_io": false, 00:20:54.966 "nvme_io_md": false, 00:20:54.966 "write_zeroes": true, 00:20:54.966 "zcopy": false, 00:20:54.966 "get_zone_info": false, 00:20:54.966 "zone_management": false, 00:20:54.966 "zone_append": false, 00:20:54.966 "compare": 
false, 00:20:54.966 "compare_and_write": false, 00:20:54.966 "abort": false, 00:20:54.966 "seek_hole": false, 00:20:54.966 "seek_data": false, 00:20:54.966 "copy": false, 00:20:54.966 "nvme_iov_md": false 00:20:54.966 }, 00:20:54.966 "memory_domains": [ 00:20:54.966 { 00:20:54.966 "dma_device_id": "system", 00:20:54.966 "dma_device_type": 1 00:20:54.966 }, 00:20:54.966 { 00:20:54.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.966 "dma_device_type": 2 00:20:54.966 }, 00:20:54.966 { 00:20:54.966 "dma_device_id": "system", 00:20:54.966 "dma_device_type": 1 00:20:54.966 }, 00:20:54.966 { 00:20:54.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.966 "dma_device_type": 2 00:20:54.966 }, 00:20:54.966 { 00:20:54.966 "dma_device_id": "system", 00:20:54.966 "dma_device_type": 1 00:20:54.966 }, 00:20:54.966 { 00:20:54.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.966 "dma_device_type": 2 00:20:54.966 } 00:20:54.966 ], 00:20:54.966 "driver_specific": { 00:20:54.966 "raid": { 00:20:54.966 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:54.966 "strip_size_kb": 64, 00:20:54.966 "state": "online", 00:20:54.966 "raid_level": "concat", 00:20:54.966 "superblock": true, 00:20:54.966 "num_base_bdevs": 3, 00:20:54.966 "num_base_bdevs_discovered": 3, 00:20:54.966 "num_base_bdevs_operational": 3, 00:20:54.966 "base_bdevs_list": [ 00:20:54.966 { 00:20:54.967 "name": "pt1", 00:20:54.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.967 "is_configured": true, 00:20:54.967 "data_offset": 2048, 00:20:54.967 "data_size": 63488 00:20:54.967 }, 00:20:54.967 { 00:20:54.967 "name": "pt2", 00:20:54.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.967 "is_configured": true, 00:20:54.967 "data_offset": 2048, 00:20:54.967 "data_size": 63488 00:20:54.967 }, 00:20:54.967 { 00:20:54.967 "name": "pt3", 00:20:54.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.967 "is_configured": true, 00:20:54.967 "data_offset": 2048, 00:20:54.967 
"data_size": 63488 00:20:54.967 } 00:20:54.967 ] 00:20:54.967 } 00:20:54.967 } 00:20:54.967 }' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:54.967 pt2 00:20:54.967 pt3' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:54.967 07:41:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:54.967 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 [2024-10-07 07:41:54.532162] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.225 07:41:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7b26ba68-98e6-4a4a-9576-1a7422afbf90 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7b26ba68-98e6-4a4a-9576-1a7422afbf90 ']' 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 [2024-10-07 07:41:54.563834] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:55.225 [2024-10-07 07:41:54.563875] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:55.225 [2024-10-07 07:41:54.563966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:55.225 [2024-10-07 07:41:54.564040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:55.225 [2024-10-07 07:41:54.564057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.225 07:41:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.226 [2024-10-07 07:41:54.723919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:55.226 [2024-10-07 07:41:54.726497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:20:55.226 [2024-10-07 07:41:54.726561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:55.226 [2024-10-07 07:41:54.726634] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:55.226 [2024-10-07 07:41:54.726697] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:55.226 [2024-10-07 07:41:54.726751] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:55.226 [2024-10-07 07:41:54.726777] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:55.226 [2024-10-07 07:41:54.726788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:55.226 request: 00:20:55.226 { 00:20:55.226 "name": "raid_bdev1", 00:20:55.226 "raid_level": "concat", 00:20:55.226 "base_bdevs": [ 00:20:55.226 "malloc1", 00:20:55.226 "malloc2", 00:20:55.226 "malloc3" 00:20:55.226 ], 00:20:55.226 "strip_size_kb": 64, 00:20:55.226 "superblock": false, 00:20:55.226 "method": "bdev_raid_create", 00:20:55.226 "req_id": 1 00:20:55.226 } 00:20:55.226 Got JSON-RPC error response 00:20:55.226 response: 00:20:55.226 { 00:20:55.226 "code": -17, 00:20:55.226 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:55.226 } 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 
00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.226 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.226 [2024-10-07 07:41:54.783886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:55.226 [2024-10-07 07:41:54.783959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.226 [2024-10-07 07:41:54.783987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:55.226 [2024-10-07 07:41:54.784001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.485 [2024-10-07 07:41:54.786927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.486 [2024-10-07 07:41:54.786977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:55.486 [2024-10-07 07:41:54.787086] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:55.486 [2024-10-07 07:41:54.787155] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:55.486 pt1 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.486 "name": "raid_bdev1", 
00:20:55.486 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:55.486 "strip_size_kb": 64, 00:20:55.486 "state": "configuring", 00:20:55.486 "raid_level": "concat", 00:20:55.486 "superblock": true, 00:20:55.486 "num_base_bdevs": 3, 00:20:55.486 "num_base_bdevs_discovered": 1, 00:20:55.486 "num_base_bdevs_operational": 3, 00:20:55.486 "base_bdevs_list": [ 00:20:55.486 { 00:20:55.486 "name": "pt1", 00:20:55.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:55.486 "is_configured": true, 00:20:55.486 "data_offset": 2048, 00:20:55.486 "data_size": 63488 00:20:55.486 }, 00:20:55.486 { 00:20:55.486 "name": null, 00:20:55.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:55.486 "is_configured": false, 00:20:55.486 "data_offset": 2048, 00:20:55.486 "data_size": 63488 00:20:55.486 }, 00:20:55.486 { 00:20:55.486 "name": null, 00:20:55.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:55.486 "is_configured": false, 00:20:55.486 "data_offset": 2048, 00:20:55.486 "data_size": 63488 00:20:55.486 } 00:20:55.486 ] 00:20:55.486 }' 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.486 07:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 [2024-10-07 07:41:55.183986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:55.745 [2024-10-07 07:41:55.184077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.745 [2024-10-07 07:41:55.184110] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:55.745 [2024-10-07 07:41:55.184125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.745 [2024-10-07 07:41:55.184643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.745 [2024-10-07 07:41:55.184682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:55.745 [2024-10-07 07:41:55.184828] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:55.745 [2024-10-07 07:41:55.184859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:55.745 pt2 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 [2024-10-07 07:41:55.192038] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.745 "name": "raid_bdev1", 00:20:55.745 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:55.745 "strip_size_kb": 64, 00:20:55.745 "state": "configuring", 00:20:55.745 "raid_level": "concat", 00:20:55.745 "superblock": true, 00:20:55.745 "num_base_bdevs": 3, 00:20:55.745 "num_base_bdevs_discovered": 1, 00:20:55.745 "num_base_bdevs_operational": 3, 00:20:55.745 "base_bdevs_list": [ 00:20:55.745 { 00:20:55.745 "name": "pt1", 00:20:55.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:55.745 "is_configured": true, 00:20:55.745 "data_offset": 2048, 00:20:55.745 "data_size": 63488 00:20:55.745 }, 00:20:55.745 { 00:20:55.745 "name": null, 00:20:55.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:55.745 "is_configured": false, 00:20:55.745 "data_offset": 0, 00:20:55.745 "data_size": 63488 00:20:55.745 }, 00:20:55.745 { 00:20:55.745 "name": null, 00:20:55.745 
"uuid": "00000000-0000-0000-0000-000000000003", 00:20:55.745 "is_configured": false, 00:20:55.745 "data_offset": 2048, 00:20:55.745 "data_size": 63488 00:20:55.745 } 00:20:55.745 ] 00:20:55.745 }' 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.745 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.310 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:56.310 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:56.310 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:56.310 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.310 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.310 [2024-10-07 07:41:55.572040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:56.310 [2024-10-07 07:41:55.572125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.310 [2024-10-07 07:41:55.572150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:56.310 [2024-10-07 07:41:55.572167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.311 [2024-10-07 07:41:55.572688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.311 [2024-10-07 07:41:55.572750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:56.311 [2024-10-07 07:41:55.572848] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:56.311 [2024-10-07 07:41:55.572889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:56.311 pt2 00:20:56.311 07:41:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.311 [2024-10-07 07:41:55.584067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:56.311 [2024-10-07 07:41:55.584127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.311 [2024-10-07 07:41:55.584165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:56.311 [2024-10-07 07:41:55.584181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.311 [2024-10-07 07:41:55.584622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.311 [2024-10-07 07:41:55.584657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:56.311 [2024-10-07 07:41:55.584759] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:56.311 [2024-10-07 07:41:55.584794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:56.311 [2024-10-07 07:41:55.584932] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:56.311 [2024-10-07 07:41:55.584948] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:56.311 [2024-10-07 07:41:55.585255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:20:56.311 [2024-10-07 07:41:55.585566] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:56.311 [2024-10-07 07:41:55.585586] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:56.311 [2024-10-07 07:41:55.585769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.311 pt3 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.311 07:41:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.311 "name": "raid_bdev1", 00:20:56.311 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:56.311 "strip_size_kb": 64, 00:20:56.311 "state": "online", 00:20:56.311 "raid_level": "concat", 00:20:56.311 "superblock": true, 00:20:56.311 "num_base_bdevs": 3, 00:20:56.311 "num_base_bdevs_discovered": 3, 00:20:56.311 "num_base_bdevs_operational": 3, 00:20:56.311 "base_bdevs_list": [ 00:20:56.311 { 00:20:56.311 "name": "pt1", 00:20:56.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:56.311 "is_configured": true, 00:20:56.311 "data_offset": 2048, 00:20:56.311 "data_size": 63488 00:20:56.311 }, 00:20:56.311 { 00:20:56.311 "name": "pt2", 00:20:56.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:56.311 "is_configured": true, 00:20:56.311 "data_offset": 2048, 00:20:56.311 "data_size": 63488 00:20:56.311 }, 00:20:56.311 { 00:20:56.311 "name": "pt3", 00:20:56.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:56.311 "is_configured": true, 00:20:56.311 "data_offset": 2048, 00:20:56.311 "data_size": 63488 00:20:56.311 } 00:20:56.311 ] 00:20:56.311 }' 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.311 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:56.569 07:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.569 [2024-10-07 07:41:55.988508] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:56.569 "name": "raid_bdev1", 00:20:56.569 "aliases": [ 00:20:56.569 "7b26ba68-98e6-4a4a-9576-1a7422afbf90" 00:20:56.569 ], 00:20:56.569 "product_name": "Raid Volume", 00:20:56.569 "block_size": 512, 00:20:56.569 "num_blocks": 190464, 00:20:56.569 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:56.569 "assigned_rate_limits": { 00:20:56.569 "rw_ios_per_sec": 0, 00:20:56.569 "rw_mbytes_per_sec": 0, 00:20:56.569 "r_mbytes_per_sec": 0, 00:20:56.569 "w_mbytes_per_sec": 0 00:20:56.569 }, 00:20:56.569 "claimed": false, 00:20:56.569 "zoned": false, 00:20:56.569 "supported_io_types": { 00:20:56.569 "read": true, 00:20:56.569 "write": true, 00:20:56.569 "unmap": true, 00:20:56.569 "flush": true, 00:20:56.569 "reset": true, 00:20:56.569 "nvme_admin": false, 00:20:56.569 "nvme_io": false, 
00:20:56.569 "nvme_io_md": false, 00:20:56.569 "write_zeroes": true, 00:20:56.569 "zcopy": false, 00:20:56.569 "get_zone_info": false, 00:20:56.569 "zone_management": false, 00:20:56.569 "zone_append": false, 00:20:56.569 "compare": false, 00:20:56.569 "compare_and_write": false, 00:20:56.569 "abort": false, 00:20:56.569 "seek_hole": false, 00:20:56.569 "seek_data": false, 00:20:56.569 "copy": false, 00:20:56.569 "nvme_iov_md": false 00:20:56.569 }, 00:20:56.569 "memory_domains": [ 00:20:56.569 { 00:20:56.569 "dma_device_id": "system", 00:20:56.569 "dma_device_type": 1 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.569 "dma_device_type": 2 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "dma_device_id": "system", 00:20:56.569 "dma_device_type": 1 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.569 "dma_device_type": 2 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "dma_device_id": "system", 00:20:56.569 "dma_device_type": 1 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.569 "dma_device_type": 2 00:20:56.569 } 00:20:56.569 ], 00:20:56.569 "driver_specific": { 00:20:56.569 "raid": { 00:20:56.569 "uuid": "7b26ba68-98e6-4a4a-9576-1a7422afbf90", 00:20:56.569 "strip_size_kb": 64, 00:20:56.569 "state": "online", 00:20:56.569 "raid_level": "concat", 00:20:56.569 "superblock": true, 00:20:56.569 "num_base_bdevs": 3, 00:20:56.569 "num_base_bdevs_discovered": 3, 00:20:56.569 "num_base_bdevs_operational": 3, 00:20:56.569 "base_bdevs_list": [ 00:20:56.569 { 00:20:56.569 "name": "pt1", 00:20:56.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:56.569 "is_configured": true, 00:20:56.569 "data_offset": 2048, 00:20:56.569 "data_size": 63488 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "name": "pt2", 00:20:56.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:56.569 "is_configured": true, 00:20:56.569 "data_offset": 2048, 00:20:56.569 
"data_size": 63488 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "name": "pt3", 00:20:56.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:56.569 "is_configured": true, 00:20:56.569 "data_offset": 2048, 00:20:56.569 "data_size": 63488 00:20:56.569 } 00:20:56.569 ] 00:20:56.569 } 00:20:56.569 } 00:20:56.569 }' 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:56.569 pt2 00:20:56.569 pt3' 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.569 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.826 [2024-10-07 07:41:56.288583] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7b26ba68-98e6-4a4a-9576-1a7422afbf90 '!=' 7b26ba68-98e6-4a4a-9576-1a7422afbf90 ']' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66941 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 66941 ']' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 66941 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 66941 00:20:56.826 killing process with pid 66941 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 66941' 00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 66941 00:20:56.826 [2024-10-07 07:41:56.376418] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:20:56.826 07:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 66941 00:20:56.826 [2024-10-07 07:41:56.376530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.826 [2024-10-07 07:41:56.376604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.826 [2024-10-07 07:41:56.376624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:57.392 [2024-10-07 07:41:56.734819] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.765 07:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:58.765 00:20:58.765 real 0m5.677s 00:20:58.765 user 0m7.881s 00:20:58.765 sys 0m1.026s 00:20:58.765 07:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:20:58.765 ************************************ 00:20:58.765 END TEST raid_superblock_test 00:20:58.765 ************************************ 00:20:58.765 07:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.765 07:41:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:20:58.765 07:41:58 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:20:58.765 07:41:58 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:20:58.765 07:41:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.765 ************************************ 00:20:58.765 START TEST raid_read_error_test 00:20:58.765 ************************************ 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test concat 3 read 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:58.765 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:58.766 07:41:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Qk3Mu3zJsp 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67201 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67201 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 67201 ']' 00:20:58.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:20:58.766 07:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.024 [2024-10-07 07:41:58.426189] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:20:59.024 [2024-10-07 07:41:58.426597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67201 ] 00:20:59.282 [2024-10-07 07:41:58.616598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.540 [2024-10-07 07:41:58.933894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.798 [2024-10-07 07:41:59.171643] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.798 [2024-10-07 07:41:59.171690] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.798 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:20:59.798 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:20:59.798 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:59.798 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:59.798 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:20:59.798 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 BaseBdev1_malloc 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 true 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 [2024-10-07 07:41:59.370435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:00.071 [2024-10-07 07:41:59.370652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.071 [2024-10-07 07:41:59.370682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:00.071 [2024-10-07 07:41:59.370698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.071 [2024-10-07 07:41:59.373224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.071 [2024-10-07 07:41:59.373268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:00.071 BaseBdev1 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 BaseBdev2_malloc 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 true 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 [2024-10-07 07:41:59.452336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:00.071 [2024-10-07 07:41:59.452398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.071 [2024-10-07 07:41:59.452419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:00.071 [2024-10-07 07:41:59.452434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.071 [2024-10-07 07:41:59.454999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.071 [2024-10-07 07:41:59.455047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:00.071 BaseBdev2 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 BaseBdev3_malloc 00:21:00.071 07:41:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 true 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.071 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.071 [2024-10-07 07:41:59.521466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:00.071 [2024-10-07 07:41:59.521526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.071 [2024-10-07 07:41:59.521547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:00.072 [2024-10-07 07:41:59.521562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.072 [2024-10-07 07:41:59.524083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.072 [2024-10-07 07:41:59.524124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:00.072 BaseBdev3 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.072 [2024-10-07 07:41:59.529552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:00.072 [2024-10-07 07:41:59.531635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.072 [2024-10-07 07:41:59.531834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.072 [2024-10-07 07:41:59.532075] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:00.072 [2024-10-07 07:41:59.532180] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:00.072 [2024-10-07 07:41:59.532555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:00.072 [2024-10-07 07:41:59.532859] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:00.072 [2024-10-07 07:41:59.532979] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:00.072 [2024-10-07 07:41:59.533242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.072 07:41:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.072 "name": "raid_bdev1", 00:21:00.072 "uuid": "d855b688-fdb4-48c3-938f-99313f62895b", 00:21:00.072 "strip_size_kb": 64, 00:21:00.072 "state": "online", 00:21:00.072 "raid_level": "concat", 00:21:00.072 "superblock": true, 00:21:00.072 "num_base_bdevs": 3, 00:21:00.072 "num_base_bdevs_discovered": 3, 00:21:00.072 "num_base_bdevs_operational": 3, 00:21:00.072 "base_bdevs_list": [ 00:21:00.072 { 00:21:00.072 "name": "BaseBdev1", 00:21:00.072 "uuid": "88704bbb-1bbd-5ace-8ae9-89beb274c281", 00:21:00.072 "is_configured": true, 00:21:00.072 "data_offset": 2048, 00:21:00.072 "data_size": 63488 00:21:00.072 }, 00:21:00.072 { 00:21:00.072 "name": "BaseBdev2", 00:21:00.072 "uuid": "1fed6c1f-da25-5793-9361-3600d6918a2b", 00:21:00.072 "is_configured": true, 00:21:00.072 "data_offset": 2048, 00:21:00.072 "data_size": 63488 
00:21:00.072 }, 00:21:00.072 { 00:21:00.072 "name": "BaseBdev3", 00:21:00.072 "uuid": "7af0dbd2-3d4c-50f3-9adc-cdd5a2ad9872", 00:21:00.072 "is_configured": true, 00:21:00.072 "data_offset": 2048, 00:21:00.072 "data_size": 63488 00:21:00.072 } 00:21:00.072 ] 00:21:00.072 }' 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.072 07:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.638 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:00.638 07:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:00.638 [2024-10-07 07:42:00.099150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:01.572 07:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.572 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:01.572 07:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.572 "name": "raid_bdev1", 00:21:01.572 "uuid": "d855b688-fdb4-48c3-938f-99313f62895b", 00:21:01.572 "strip_size_kb": 64, 00:21:01.572 "state": "online", 00:21:01.572 "raid_level": "concat", 00:21:01.572 "superblock": true, 00:21:01.572 "num_base_bdevs": 3, 00:21:01.572 "num_base_bdevs_discovered": 3, 00:21:01.572 "num_base_bdevs_operational": 3, 00:21:01.572 "base_bdevs_list": [ 00:21:01.572 { 00:21:01.572 "name": "BaseBdev1", 00:21:01.572 "uuid": "88704bbb-1bbd-5ace-8ae9-89beb274c281", 00:21:01.572 "is_configured": true, 00:21:01.572 "data_offset": 2048, 00:21:01.572 "data_size": 63488 
00:21:01.572 }, 00:21:01.572 { 00:21:01.572 "name": "BaseBdev2", 00:21:01.572 "uuid": "1fed6c1f-da25-5793-9361-3600d6918a2b", 00:21:01.572 "is_configured": true, 00:21:01.572 "data_offset": 2048, 00:21:01.572 "data_size": 63488 00:21:01.572 }, 00:21:01.572 { 00:21:01.572 "name": "BaseBdev3", 00:21:01.572 "uuid": "7af0dbd2-3d4c-50f3-9adc-cdd5a2ad9872", 00:21:01.572 "is_configured": true, 00:21:01.572 "data_offset": 2048, 00:21:01.572 "data_size": 63488 00:21:01.572 } 00:21:01.572 ] 00:21:01.572 }' 00:21:01.572 07:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.572 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.139 [2024-10-07 07:42:01.432891] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.139 [2024-10-07 07:42:01.435543] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.139 [2024-10-07 07:42:01.440182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.139 [2024-10-07 07:42:01.440410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.139 [2024-10-07 07:42:01.440609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.139 { 00:21:02.139 "results": [ 00:21:02.139 { 00:21:02.139 "job": "raid_bdev1", 00:21:02.139 "core_mask": "0x1", 00:21:02.139 "workload": "randrw", 00:21:02.139 "percentage": 50, 00:21:02.139 "status": "finished", 00:21:02.139 "queue_depth": 1, 00:21:02.139 "io_size": 131072, 00:21:02.139 "runtime": 1.334192, 00:21:02.139 "iops": 
14373.493470205189, 00:21:02.139 "mibps": 1796.6866837756486, 00:21:02.139 "io_failed": 1, 00:21:02.139 "io_timeout": 0, 00:21:02.139 "avg_latency_us": 96.32461004424714, 00:21:02.139 "min_latency_us": 27.184761904761906, 00:21:02.139 "max_latency_us": 3339.215238095238 00:21:02.139 } 00:21:02.139 ], 00:21:02.139 "core_count": 1 00:21:02.139 } 00:21:02.139 [2024-10-07 07:42:01.440903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67201 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 67201 ']' 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 67201 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 67201 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 67201' 00:21:02.139 killing process with pid 67201 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 67201 00:21:02.139 [2024-10-07 07:42:01.487792] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:02.139 07:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 67201 00:21:02.397 [2024-10-07 
07:42:01.852552] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Qk3Mu3zJsp 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:21:04.305 ************************************ 00:21:04.305 END TEST raid_read_error_test 00:21:04.305 ************************************ 00:21:04.305 00:21:04.305 real 0m5.249s 00:21:04.305 user 0m6.096s 00:21:04.305 sys 0m0.659s 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:04.305 07:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.305 07:42:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:21:04.305 07:42:03 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:21:04.305 07:42:03 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:04.305 07:42:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:04.305 ************************************ 00:21:04.305 START TEST raid_write_error_test 00:21:04.305 ************************************ 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test concat 3 write 00:21:04.305 07:42:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:04.305 07:42:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BiLTXM8KNX 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67352 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67352 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 67352 ']' 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:04.305 07:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.305 [2024-10-07 07:42:03.729012] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:04.305 [2024-10-07 07:42:03.729212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67352 ] 00:21:04.563 [2024-10-07 07:42:03.907182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.821 [2024-10-07 07:42:04.216524] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.079 [2024-10-07 07:42:04.451997] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.079 [2024-10-07 07:42:04.452032] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.337 BaseBdev1_malloc 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.337 true 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.337 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.337 [2024-10-07 07:42:04.725420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:05.337 [2024-10-07 07:42:04.725493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.337 [2024-10-07 07:42:04.725516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:05.338 [2024-10-07 07:42:04.725532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.338 [2024-10-07 07:42:04.728213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.338 [2024-10-07 07:42:04.728258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:05.338 BaseBdev1 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.338 BaseBdev2_malloc 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.338 true 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.338 [2024-10-07 07:42:04.799736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:05.338 [2024-10-07 07:42:04.799824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.338 [2024-10-07 07:42:04.799863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:05.338 [2024-10-07 07:42:04.799879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.338 [2024-10-07 07:42:04.802400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.338 [2024-10-07 07:42:04.802446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:05.338 BaseBdev2 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:05.338 07:42:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.338 BaseBdev3_malloc 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.338 true 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.338 [2024-10-07 07:42:04.867124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:05.338 [2024-10-07 07:42:04.867190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.338 [2024-10-07 07:42:04.867209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:05.338 [2024-10-07 07:42:04.867223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.338 [2024-10-07 07:42:04.869892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.338 [2024-10-07 07:42:04.870105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:21:05.338 BaseBdev3 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.338 [2024-10-07 07:42:04.879254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.338 [2024-10-07 07:42:04.881604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:05.338 [2024-10-07 07:42:04.881838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.338 [2024-10-07 07:42:04.882041] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:05.338 [2024-10-07 07:42:04.882054] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:05.338 [2024-10-07 07:42:04.882320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:05.338 [2024-10-07 07:42:04.882475] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:05.338 [2024-10-07 07:42:04.882495] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:05.338 [2024-10-07 07:42:04.882642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:05.338 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.597 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:05.597 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.597 "name": "raid_bdev1", 00:21:05.597 "uuid": "6ce0ef93-ceb5-4299-80d1-2fa68adb50fe", 00:21:05.597 "strip_size_kb": 64, 00:21:05.597 "state": "online", 00:21:05.597 "raid_level": "concat", 00:21:05.597 "superblock": true, 00:21:05.597 "num_base_bdevs": 3, 00:21:05.597 "num_base_bdevs_discovered": 3, 00:21:05.597 "num_base_bdevs_operational": 3, 00:21:05.597 "base_bdevs_list": [ 00:21:05.597 { 00:21:05.597 
"name": "BaseBdev1", 00:21:05.597 "uuid": "1cbf545b-47f1-5d83-baa6-1502b2478357", 00:21:05.597 "is_configured": true, 00:21:05.597 "data_offset": 2048, 00:21:05.597 "data_size": 63488 00:21:05.597 }, 00:21:05.597 { 00:21:05.597 "name": "BaseBdev2", 00:21:05.597 "uuid": "52a28037-db54-58f7-9987-442b06a24133", 00:21:05.597 "is_configured": true, 00:21:05.597 "data_offset": 2048, 00:21:05.597 "data_size": 63488 00:21:05.597 }, 00:21:05.597 { 00:21:05.597 "name": "BaseBdev3", 00:21:05.597 "uuid": "3eecebf3-0bff-5017-bdcf-19489cf855a4", 00:21:05.597 "is_configured": true, 00:21:05.597 "data_offset": 2048, 00:21:05.597 "data_size": 63488 00:21:05.597 } 00:21:05.597 ] 00:21:05.597 }' 00:21:05.597 07:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.597 07:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.855 07:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:05.855 07:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:06.112 [2024-10-07 07:42:05.476823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.046 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:07.047 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.047 "name": "raid_bdev1", 00:21:07.047 "uuid": "6ce0ef93-ceb5-4299-80d1-2fa68adb50fe", 00:21:07.047 "strip_size_kb": 64, 00:21:07.047 "state": "online", 
00:21:07.047 "raid_level": "concat", 00:21:07.047 "superblock": true, 00:21:07.047 "num_base_bdevs": 3, 00:21:07.047 "num_base_bdevs_discovered": 3, 00:21:07.047 "num_base_bdevs_operational": 3, 00:21:07.047 "base_bdevs_list": [ 00:21:07.047 { 00:21:07.047 "name": "BaseBdev1", 00:21:07.047 "uuid": "1cbf545b-47f1-5d83-baa6-1502b2478357", 00:21:07.047 "is_configured": true, 00:21:07.047 "data_offset": 2048, 00:21:07.047 "data_size": 63488 00:21:07.047 }, 00:21:07.047 { 00:21:07.047 "name": "BaseBdev2", 00:21:07.047 "uuid": "52a28037-db54-58f7-9987-442b06a24133", 00:21:07.047 "is_configured": true, 00:21:07.047 "data_offset": 2048, 00:21:07.047 "data_size": 63488 00:21:07.047 }, 00:21:07.047 { 00:21:07.047 "name": "BaseBdev3", 00:21:07.047 "uuid": "3eecebf3-0bff-5017-bdcf-19489cf855a4", 00:21:07.047 "is_configured": true, 00:21:07.047 "data_offset": 2048, 00:21:07.047 "data_size": 63488 00:21:07.047 } 00:21:07.047 ] 00:21:07.047 }' 00:21:07.047 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.047 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.305 [2024-10-07 07:42:06.811257] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.305 [2024-10-07 07:42:06.811290] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.305 [2024-10-07 07:42:06.814364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.305 [2024-10-07 07:42:06.814416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.305 [2024-10-07 07:42:06.814459] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.305 [2024-10-07 07:42:06.814471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:07.305 { 00:21:07.305 "results": [ 00:21:07.305 { 00:21:07.305 "job": "raid_bdev1", 00:21:07.305 "core_mask": "0x1", 00:21:07.305 "workload": "randrw", 00:21:07.305 "percentage": 50, 00:21:07.305 "status": "finished", 00:21:07.305 "queue_depth": 1, 00:21:07.305 "io_size": 131072, 00:21:07.305 "runtime": 1.331785, 00:21:07.305 "iops": 13119.985583258558, 00:21:07.305 "mibps": 1639.9981979073198, 00:21:07.305 "io_failed": 1, 00:21:07.305 "io_timeout": 0, 00:21:07.305 "avg_latency_us": 105.12833237953532, 00:21:07.305 "min_latency_us": 27.794285714285714, 00:21:07.305 "max_latency_us": 1614.9942857142858 00:21:07.305 } 00:21:07.305 ], 00:21:07.305 "core_count": 1 00:21:07.305 } 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67352 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 67352 ']' 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 67352 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # uname 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 67352 00:21:07.305 killing process with pid 67352 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:21:07.305 
07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 67352' 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 67352 00:21:07.305 [2024-10-07 07:42:06.861299] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:07.305 07:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 67352 00:21:07.871 [2024-10-07 07:42:07.131124] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BiLTXM8KNX 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:21:09.805 00:21:09.805 real 0m5.261s 00:21:09.805 user 0m6.224s 00:21:09.805 sys 0m0.638s 00:21:09.805 ************************************ 00:21:09.805 END TEST raid_write_error_test 00:21:09.805 ************************************ 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:09.805 07:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.805 07:42:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:09.805 07:42:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:21:09.805 07:42:08 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:21:09.805 07:42:08 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:09.805 07:42:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:09.805 ************************************ 00:21:09.805 START TEST raid_state_function_test 00:21:09.805 ************************************ 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 3 false 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:09.805 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:09.806 Process raid pid: 67501 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67501 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67501' 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67501 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 67501 ']' 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:09.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:09.806 07:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.806 [2024-10-07 07:42:09.054847] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:09.806 [2024-10-07 07:42:09.055019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.806 [2024-10-07 07:42:09.246950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.370 [2024-10-07 07:42:09.674866] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.628 [2024-10-07 07:42:09.975933] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:10.628 [2024-10-07 07:42:09.975999] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.628 [2024-10-07 07:42:10.171194] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:10.628 [2024-10-07 07:42:10.171275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:10.628 [2024-10-07 07:42:10.171290] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:10.628 [2024-10-07 07:42:10.171309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:10.628 [2024-10-07 07:42:10.171318] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:10.628 [2024-10-07 07:42:10.171334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.628 
07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:10.628 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.887 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:10.887 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.887 "name": "Existed_Raid", 00:21:10.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.887 "strip_size_kb": 0, 00:21:10.887 "state": "configuring", 00:21:10.887 "raid_level": "raid1", 00:21:10.887 "superblock": false, 00:21:10.887 "num_base_bdevs": 3, 00:21:10.887 "num_base_bdevs_discovered": 0, 00:21:10.887 "num_base_bdevs_operational": 3, 00:21:10.887 "base_bdevs_list": [ 00:21:10.887 { 00:21:10.887 "name": "BaseBdev1", 00:21:10.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.887 "is_configured": false, 00:21:10.887 "data_offset": 0, 00:21:10.887 "data_size": 0 00:21:10.887 }, 00:21:10.887 { 00:21:10.887 "name": "BaseBdev2", 00:21:10.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.887 "is_configured": false, 00:21:10.887 "data_offset": 0, 00:21:10.887 "data_size": 0 00:21:10.887 }, 00:21:10.887 { 00:21:10.887 "name": "BaseBdev3", 00:21:10.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.887 "is_configured": false, 00:21:10.887 "data_offset": 0, 00:21:10.887 "data_size": 0 00:21:10.887 } 00:21:10.887 ] 00:21:10.887 }' 00:21:10.887 07:42:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.887 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.146 [2024-10-07 07:42:10.659265] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:11.146 [2024-10-07 07:42:10.659330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.146 [2024-10-07 07:42:10.667215] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.146 [2024-10-07 07:42:10.667468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.146 [2024-10-07 07:42:10.667489] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.146 [2024-10-07 07:42:10.667504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.146 [2024-10-07 07:42:10.667513] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:11.146 [2024-10-07 07:42:10.667526] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.146 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.405 [2024-10-07 07:42:10.735560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.405 BaseBdev1 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.405 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.405 [ 00:21:11.405 { 00:21:11.405 "name": "BaseBdev1", 00:21:11.405 "aliases": [ 00:21:11.405 "24cfc2b0-14f7-4951-8f70-75f028aefe1c" 00:21:11.405 ], 00:21:11.405 "product_name": "Malloc disk", 00:21:11.405 "block_size": 512, 00:21:11.405 "num_blocks": 65536, 00:21:11.405 "uuid": "24cfc2b0-14f7-4951-8f70-75f028aefe1c", 00:21:11.405 "assigned_rate_limits": { 00:21:11.405 "rw_ios_per_sec": 0, 00:21:11.405 "rw_mbytes_per_sec": 0, 00:21:11.405 "r_mbytes_per_sec": 0, 00:21:11.405 "w_mbytes_per_sec": 0 00:21:11.405 }, 00:21:11.405 "claimed": true, 00:21:11.405 "claim_type": "exclusive_write", 00:21:11.405 "zoned": false, 00:21:11.405 "supported_io_types": { 00:21:11.405 "read": true, 00:21:11.405 "write": true, 00:21:11.405 "unmap": true, 00:21:11.405 "flush": true, 00:21:11.405 "reset": true, 00:21:11.405 "nvme_admin": false, 00:21:11.405 "nvme_io": false, 00:21:11.406 "nvme_io_md": false, 00:21:11.406 "write_zeroes": true, 00:21:11.406 "zcopy": true, 00:21:11.406 "get_zone_info": false, 00:21:11.406 "zone_management": false, 00:21:11.406 "zone_append": false, 00:21:11.406 "compare": false, 00:21:11.406 "compare_and_write": false, 00:21:11.406 "abort": true, 00:21:11.406 "seek_hole": false, 00:21:11.406 "seek_data": false, 00:21:11.406 "copy": true, 00:21:11.406 "nvme_iov_md": false 00:21:11.406 }, 00:21:11.406 "memory_domains": [ 00:21:11.406 { 00:21:11.406 "dma_device_id": "system", 00:21:11.406 "dma_device_type": 1 00:21:11.406 }, 00:21:11.406 { 00:21:11.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.406 "dma_device_type": 2 00:21:11.406 } 00:21:11.406 ], 00:21:11.406 "driver_specific": {} 00:21:11.406 } 00:21:11.406 ] 00:21:11.406 07:42:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:21:11.406 "name": "Existed_Raid", 00:21:11.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.406 "strip_size_kb": 0, 00:21:11.406 "state": "configuring", 00:21:11.406 "raid_level": "raid1", 00:21:11.406 "superblock": false, 00:21:11.406 "num_base_bdevs": 3, 00:21:11.406 "num_base_bdevs_discovered": 1, 00:21:11.406 "num_base_bdevs_operational": 3, 00:21:11.406 "base_bdevs_list": [ 00:21:11.406 { 00:21:11.406 "name": "BaseBdev1", 00:21:11.406 "uuid": "24cfc2b0-14f7-4951-8f70-75f028aefe1c", 00:21:11.406 "is_configured": true, 00:21:11.406 "data_offset": 0, 00:21:11.406 "data_size": 65536 00:21:11.406 }, 00:21:11.406 { 00:21:11.406 "name": "BaseBdev2", 00:21:11.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.406 "is_configured": false, 00:21:11.406 "data_offset": 0, 00:21:11.406 "data_size": 0 00:21:11.406 }, 00:21:11.406 { 00:21:11.406 "name": "BaseBdev3", 00:21:11.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.406 "is_configured": false, 00:21:11.406 "data_offset": 0, 00:21:11.406 "data_size": 0 00:21:11.406 } 00:21:11.406 ] 00:21:11.406 }' 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.406 07:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 [2024-10-07 07:42:11.167783] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:11.665 [2024-10-07 07:42:11.167879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 [2024-10-07 07:42:11.179758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.665 [2024-10-07 07:42:11.182512] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.665 [2024-10-07 07:42:11.182565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.665 [2024-10-07 07:42:11.182578] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:11.665 [2024-10-07 07:42:11.182592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:11.665 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.666 "name": "Existed_Raid", 00:21:11.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.666 "strip_size_kb": 0, 00:21:11.666 "state": "configuring", 00:21:11.666 "raid_level": "raid1", 00:21:11.666 "superblock": false, 00:21:11.666 "num_base_bdevs": 3, 00:21:11.666 "num_base_bdevs_discovered": 1, 00:21:11.666 "num_base_bdevs_operational": 3, 00:21:11.666 "base_bdevs_list": [ 00:21:11.666 { 00:21:11.666 "name": "BaseBdev1", 00:21:11.666 "uuid": "24cfc2b0-14f7-4951-8f70-75f028aefe1c", 00:21:11.666 "is_configured": true, 00:21:11.666 "data_offset": 0, 00:21:11.666 "data_size": 65536 00:21:11.666 }, 00:21:11.666 { 00:21:11.666 "name": "BaseBdev2", 00:21:11.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.666 
"is_configured": false, 00:21:11.666 "data_offset": 0, 00:21:11.666 "data_size": 0 00:21:11.666 }, 00:21:11.666 { 00:21:11.666 "name": "BaseBdev3", 00:21:11.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.666 "is_configured": false, 00:21:11.666 "data_offset": 0, 00:21:11.666 "data_size": 0 00:21:11.666 } 00:21:11.666 ] 00:21:11.666 }' 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.666 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.234 [2024-10-07 07:42:11.636911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:12.234 BaseBdev2 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:12.234 07:42:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.234 [ 00:21:12.234 { 00:21:12.234 "name": "BaseBdev2", 00:21:12.234 "aliases": [ 00:21:12.234 "28dbf8ec-4630-4892-90ab-1871fcd0683d" 00:21:12.234 ], 00:21:12.234 "product_name": "Malloc disk", 00:21:12.234 "block_size": 512, 00:21:12.234 "num_blocks": 65536, 00:21:12.234 "uuid": "28dbf8ec-4630-4892-90ab-1871fcd0683d", 00:21:12.234 "assigned_rate_limits": { 00:21:12.234 "rw_ios_per_sec": 0, 00:21:12.234 "rw_mbytes_per_sec": 0, 00:21:12.234 "r_mbytes_per_sec": 0, 00:21:12.234 "w_mbytes_per_sec": 0 00:21:12.234 }, 00:21:12.234 "claimed": true, 00:21:12.234 "claim_type": "exclusive_write", 00:21:12.234 "zoned": false, 00:21:12.234 "supported_io_types": { 00:21:12.234 "read": true, 00:21:12.234 "write": true, 00:21:12.234 "unmap": true, 00:21:12.234 "flush": true, 00:21:12.234 "reset": true, 00:21:12.234 "nvme_admin": false, 00:21:12.234 "nvme_io": false, 00:21:12.234 "nvme_io_md": false, 00:21:12.234 "write_zeroes": true, 00:21:12.234 "zcopy": true, 00:21:12.234 "get_zone_info": false, 00:21:12.234 "zone_management": false, 00:21:12.234 "zone_append": false, 00:21:12.234 "compare": false, 00:21:12.234 "compare_and_write": false, 00:21:12.234 "abort": true, 00:21:12.234 "seek_hole": false, 00:21:12.234 "seek_data": false, 00:21:12.234 "copy": true, 00:21:12.234 "nvme_iov_md": false 00:21:12.234 }, 00:21:12.234 
"memory_domains": [ 00:21:12.234 { 00:21:12.234 "dma_device_id": "system", 00:21:12.234 "dma_device_type": 1 00:21:12.234 }, 00:21:12.234 { 00:21:12.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.234 "dma_device_type": 2 00:21:12.234 } 00:21:12.234 ], 00:21:12.234 "driver_specific": {} 00:21:12.234 } 00:21:12.234 ] 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.234 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.235 "name": "Existed_Raid", 00:21:12.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.235 "strip_size_kb": 0, 00:21:12.235 "state": "configuring", 00:21:12.235 "raid_level": "raid1", 00:21:12.235 "superblock": false, 00:21:12.235 "num_base_bdevs": 3, 00:21:12.235 "num_base_bdevs_discovered": 2, 00:21:12.235 "num_base_bdevs_operational": 3, 00:21:12.235 "base_bdevs_list": [ 00:21:12.235 { 00:21:12.235 "name": "BaseBdev1", 00:21:12.235 "uuid": "24cfc2b0-14f7-4951-8f70-75f028aefe1c", 00:21:12.235 "is_configured": true, 00:21:12.235 "data_offset": 0, 00:21:12.235 "data_size": 65536 00:21:12.235 }, 00:21:12.235 { 00:21:12.235 "name": "BaseBdev2", 00:21:12.235 "uuid": "28dbf8ec-4630-4892-90ab-1871fcd0683d", 00:21:12.235 "is_configured": true, 00:21:12.235 "data_offset": 0, 00:21:12.235 "data_size": 65536 00:21:12.235 }, 00:21:12.235 { 00:21:12.235 "name": "BaseBdev3", 00:21:12.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.235 "is_configured": false, 00:21:12.235 "data_offset": 0, 00:21:12.235 "data_size": 0 00:21:12.235 } 00:21:12.235 ] 00:21:12.235 }' 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.235 07:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.814 [2024-10-07 07:42:12.159763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:12.814 [2024-10-07 07:42:12.159834] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:12.814 [2024-10-07 07:42:12.159860] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:12.814 [2024-10-07 07:42:12.160182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:12.814 [2024-10-07 07:42:12.160371] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:12.814 [2024-10-07 07:42:12.160382] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:12.814 [2024-10-07 07:42:12.160713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.814 BaseBdev3 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.814 [ 00:21:12.814 { 00:21:12.814 "name": "BaseBdev3", 00:21:12.814 "aliases": [ 00:21:12.814 "fb1a22d7-d39d-4475-b15d-c1dccda12487" 00:21:12.814 ], 00:21:12.814 "product_name": "Malloc disk", 00:21:12.814 "block_size": 512, 00:21:12.814 "num_blocks": 65536, 00:21:12.814 "uuid": "fb1a22d7-d39d-4475-b15d-c1dccda12487", 00:21:12.814 "assigned_rate_limits": { 00:21:12.814 "rw_ios_per_sec": 0, 00:21:12.814 "rw_mbytes_per_sec": 0, 00:21:12.814 "r_mbytes_per_sec": 0, 00:21:12.814 "w_mbytes_per_sec": 0 00:21:12.814 }, 00:21:12.814 "claimed": true, 00:21:12.814 "claim_type": "exclusive_write", 00:21:12.814 "zoned": false, 00:21:12.814 "supported_io_types": { 00:21:12.814 "read": true, 00:21:12.814 "write": true, 00:21:12.814 "unmap": true, 00:21:12.814 "flush": true, 00:21:12.814 "reset": true, 00:21:12.814 "nvme_admin": false, 00:21:12.814 "nvme_io": false, 00:21:12.814 "nvme_io_md": false, 00:21:12.814 "write_zeroes": true, 00:21:12.814 "zcopy": true, 00:21:12.814 "get_zone_info": false, 00:21:12.814 "zone_management": false, 00:21:12.814 "zone_append": false, 00:21:12.814 "compare": false, 00:21:12.814 "compare_and_write": false, 00:21:12.814 "abort": true, 00:21:12.814 "seek_hole": false, 00:21:12.814 "seek_data": false, 00:21:12.814 
"copy": true, 00:21:12.814 "nvme_iov_md": false 00:21:12.814 }, 00:21:12.814 "memory_domains": [ 00:21:12.814 { 00:21:12.814 "dma_device_id": "system", 00:21:12.814 "dma_device_type": 1 00:21:12.814 }, 00:21:12.814 { 00:21:12.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.814 "dma_device_type": 2 00:21:12.814 } 00:21:12.814 ], 00:21:12.814 "driver_specific": {} 00:21:12.814 } 00:21:12.814 ] 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.814 07:42:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.814 "name": "Existed_Raid", 00:21:12.814 "uuid": "da6f9744-b799-48d1-a97b-a3a60655cee8", 00:21:12.814 "strip_size_kb": 0, 00:21:12.814 "state": "online", 00:21:12.814 "raid_level": "raid1", 00:21:12.814 "superblock": false, 00:21:12.814 "num_base_bdevs": 3, 00:21:12.814 "num_base_bdevs_discovered": 3, 00:21:12.814 "num_base_bdevs_operational": 3, 00:21:12.814 "base_bdevs_list": [ 00:21:12.814 { 00:21:12.814 "name": "BaseBdev1", 00:21:12.814 "uuid": "24cfc2b0-14f7-4951-8f70-75f028aefe1c", 00:21:12.814 "is_configured": true, 00:21:12.814 "data_offset": 0, 00:21:12.814 "data_size": 65536 00:21:12.814 }, 00:21:12.814 { 00:21:12.814 "name": "BaseBdev2", 00:21:12.814 "uuid": "28dbf8ec-4630-4892-90ab-1871fcd0683d", 00:21:12.814 "is_configured": true, 00:21:12.814 "data_offset": 0, 00:21:12.814 "data_size": 65536 00:21:12.814 }, 00:21:12.814 { 00:21:12.814 "name": "BaseBdev3", 00:21:12.814 "uuid": "fb1a22d7-d39d-4475-b15d-c1dccda12487", 00:21:12.814 "is_configured": true, 00:21:12.814 "data_offset": 0, 00:21:12.814 "data_size": 65536 00:21:12.814 } 00:21:12.814 ] 00:21:12.814 }' 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.814 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.086 07:42:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:13.086 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.344 [2024-10-07 07:42:12.644351] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.344 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:13.344 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.344 "name": "Existed_Raid", 00:21:13.344 "aliases": [ 00:21:13.344 "da6f9744-b799-48d1-a97b-a3a60655cee8" 00:21:13.344 ], 00:21:13.344 "product_name": "Raid Volume", 00:21:13.344 "block_size": 512, 00:21:13.344 "num_blocks": 65536, 00:21:13.344 "uuid": "da6f9744-b799-48d1-a97b-a3a60655cee8", 00:21:13.344 "assigned_rate_limits": { 00:21:13.344 "rw_ios_per_sec": 0, 00:21:13.344 "rw_mbytes_per_sec": 0, 00:21:13.344 "r_mbytes_per_sec": 0, 00:21:13.344 "w_mbytes_per_sec": 0 00:21:13.344 }, 00:21:13.344 "claimed": false, 00:21:13.344 "zoned": false, 
00:21:13.344 "supported_io_types": { 00:21:13.344 "read": true, 00:21:13.344 "write": true, 00:21:13.344 "unmap": false, 00:21:13.344 "flush": false, 00:21:13.344 "reset": true, 00:21:13.344 "nvme_admin": false, 00:21:13.344 "nvme_io": false, 00:21:13.344 "nvme_io_md": false, 00:21:13.344 "write_zeroes": true, 00:21:13.344 "zcopy": false, 00:21:13.344 "get_zone_info": false, 00:21:13.344 "zone_management": false, 00:21:13.344 "zone_append": false, 00:21:13.344 "compare": false, 00:21:13.344 "compare_and_write": false, 00:21:13.344 "abort": false, 00:21:13.344 "seek_hole": false, 00:21:13.344 "seek_data": false, 00:21:13.344 "copy": false, 00:21:13.344 "nvme_iov_md": false 00:21:13.344 }, 00:21:13.344 "memory_domains": [ 00:21:13.344 { 00:21:13.344 "dma_device_id": "system", 00:21:13.344 "dma_device_type": 1 00:21:13.344 }, 00:21:13.344 { 00:21:13.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.344 "dma_device_type": 2 00:21:13.344 }, 00:21:13.344 { 00:21:13.344 "dma_device_id": "system", 00:21:13.344 "dma_device_type": 1 00:21:13.344 }, 00:21:13.344 { 00:21:13.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.344 "dma_device_type": 2 00:21:13.344 }, 00:21:13.344 { 00:21:13.344 "dma_device_id": "system", 00:21:13.344 "dma_device_type": 1 00:21:13.344 }, 00:21:13.344 { 00:21:13.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.344 "dma_device_type": 2 00:21:13.344 } 00:21:13.344 ], 00:21:13.344 "driver_specific": { 00:21:13.344 "raid": { 00:21:13.344 "uuid": "da6f9744-b799-48d1-a97b-a3a60655cee8", 00:21:13.344 "strip_size_kb": 0, 00:21:13.344 "state": "online", 00:21:13.344 "raid_level": "raid1", 00:21:13.344 "superblock": false, 00:21:13.344 "num_base_bdevs": 3, 00:21:13.344 "num_base_bdevs_discovered": 3, 00:21:13.344 "num_base_bdevs_operational": 3, 00:21:13.344 "base_bdevs_list": [ 00:21:13.345 { 00:21:13.345 "name": "BaseBdev1", 00:21:13.345 "uuid": "24cfc2b0-14f7-4951-8f70-75f028aefe1c", 00:21:13.345 "is_configured": true, 00:21:13.345 
"data_offset": 0, 00:21:13.345 "data_size": 65536 00:21:13.345 }, 00:21:13.345 { 00:21:13.345 "name": "BaseBdev2", 00:21:13.345 "uuid": "28dbf8ec-4630-4892-90ab-1871fcd0683d", 00:21:13.345 "is_configured": true, 00:21:13.345 "data_offset": 0, 00:21:13.345 "data_size": 65536 00:21:13.345 }, 00:21:13.345 { 00:21:13.345 "name": "BaseBdev3", 00:21:13.345 "uuid": "fb1a22d7-d39d-4475-b15d-c1dccda12487", 00:21:13.345 "is_configured": true, 00:21:13.345 "data_offset": 0, 00:21:13.345 "data_size": 65536 00:21:13.345 } 00:21:13.345 ] 00:21:13.345 } 00:21:13.345 } 00:21:13.345 }' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:13.345 BaseBdev2 00:21:13.345 BaseBdev3' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.345 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:13.605 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:13.605 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:21:13.605 07:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:13.605 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:13.605 07:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.605 [2024-10-07 07:42:12.924061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.605 "name": "Existed_Raid", 00:21:13.605 "uuid": "da6f9744-b799-48d1-a97b-a3a60655cee8", 00:21:13.605 "strip_size_kb": 0, 00:21:13.605 "state": "online", 00:21:13.605 "raid_level": "raid1", 00:21:13.605 "superblock": false, 00:21:13.605 "num_base_bdevs": 3, 00:21:13.605 "num_base_bdevs_discovered": 2, 00:21:13.605 "num_base_bdevs_operational": 2, 00:21:13.605 "base_bdevs_list": [ 00:21:13.605 { 00:21:13.605 "name": null, 00:21:13.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.605 "is_configured": false, 00:21:13.605 "data_offset": 0, 00:21:13.605 "data_size": 65536 00:21:13.605 }, 00:21:13.605 { 00:21:13.605 "name": "BaseBdev2", 00:21:13.605 "uuid": "28dbf8ec-4630-4892-90ab-1871fcd0683d", 00:21:13.605 "is_configured": true, 00:21:13.605 "data_offset": 0, 00:21:13.605 "data_size": 65536 00:21:13.605 }, 00:21:13.605 { 00:21:13.605 "name": "BaseBdev3", 00:21:13.605 "uuid": "fb1a22d7-d39d-4475-b15d-c1dccda12487", 00:21:13.605 "is_configured": true, 00:21:13.605 "data_offset": 0, 00:21:13.605 "data_size": 65536 00:21:13.605 } 00:21:13.605 ] 
00:21:13.605 }' 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.605 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.172 [2024-10-07 07:42:13.525207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:14.172 07:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.172 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.172 [2024-10-07 07:42:13.677545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:14.172 [2024-10-07 07:42:13.677652] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.433 [2024-10-07 07:42:13.779635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.433 [2024-10-07 07:42:13.779696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.433 [2024-10-07 07:42:13.779733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:14.433 07:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.433 BaseBdev2 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:14.433 
07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.433 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.433 [ 00:21:14.433 { 00:21:14.433 "name": "BaseBdev2", 00:21:14.433 "aliases": [ 00:21:14.433 "7fae7b8b-d591-4af8-a7c1-446f053129db" 00:21:14.433 ], 00:21:14.433 "product_name": "Malloc disk", 00:21:14.433 "block_size": 512, 00:21:14.433 "num_blocks": 65536, 00:21:14.433 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:14.433 "assigned_rate_limits": { 00:21:14.433 "rw_ios_per_sec": 0, 00:21:14.433 "rw_mbytes_per_sec": 0, 00:21:14.433 "r_mbytes_per_sec": 0, 00:21:14.433 "w_mbytes_per_sec": 0 00:21:14.433 }, 00:21:14.433 "claimed": false, 00:21:14.433 "zoned": false, 00:21:14.433 "supported_io_types": { 00:21:14.433 "read": true, 00:21:14.433 "write": true, 00:21:14.433 "unmap": true, 00:21:14.433 "flush": true, 00:21:14.433 "reset": true, 00:21:14.433 "nvme_admin": false, 00:21:14.433 "nvme_io": false, 00:21:14.433 "nvme_io_md": false, 00:21:14.433 "write_zeroes": true, 
00:21:14.433 "zcopy": true, 00:21:14.433 "get_zone_info": false, 00:21:14.433 "zone_management": false, 00:21:14.433 "zone_append": false, 00:21:14.433 "compare": false, 00:21:14.433 "compare_and_write": false, 00:21:14.433 "abort": true, 00:21:14.433 "seek_hole": false, 00:21:14.433 "seek_data": false, 00:21:14.433 "copy": true, 00:21:14.433 "nvme_iov_md": false 00:21:14.433 }, 00:21:14.433 "memory_domains": [ 00:21:14.433 { 00:21:14.433 "dma_device_id": "system", 00:21:14.433 "dma_device_type": 1 00:21:14.433 }, 00:21:14.433 { 00:21:14.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.433 "dma_device_type": 2 00:21:14.433 } 00:21:14.433 ], 00:21:14.434 "driver_specific": {} 00:21:14.434 } 00:21:14.434 ] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.434 BaseBdev3 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:14.434 07:42:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.434 [ 00:21:14.434 { 00:21:14.434 "name": "BaseBdev3", 00:21:14.434 "aliases": [ 00:21:14.434 "c2f53022-407f-4b66-b952-b4a78667c399" 00:21:14.434 ], 00:21:14.434 "product_name": "Malloc disk", 00:21:14.434 "block_size": 512, 00:21:14.434 "num_blocks": 65536, 00:21:14.434 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:14.434 "assigned_rate_limits": { 00:21:14.434 "rw_ios_per_sec": 0, 00:21:14.434 "rw_mbytes_per_sec": 0, 00:21:14.434 "r_mbytes_per_sec": 0, 00:21:14.434 "w_mbytes_per_sec": 0 00:21:14.434 }, 00:21:14.434 "claimed": false, 00:21:14.434 "zoned": false, 00:21:14.434 "supported_io_types": { 00:21:14.434 "read": true, 00:21:14.434 "write": true, 00:21:14.434 "unmap": true, 00:21:14.434 "flush": true, 00:21:14.434 "reset": true, 00:21:14.434 "nvme_admin": false, 00:21:14.434 "nvme_io": false, 00:21:14.434 "nvme_io_md": false, 00:21:14.434 "write_zeroes": true, 
00:21:14.434 "zcopy": true, 00:21:14.434 "get_zone_info": false, 00:21:14.434 "zone_management": false, 00:21:14.434 "zone_append": false, 00:21:14.434 "compare": false, 00:21:14.434 "compare_and_write": false, 00:21:14.434 "abort": true, 00:21:14.434 "seek_hole": false, 00:21:14.434 "seek_data": false, 00:21:14.434 "copy": true, 00:21:14.434 "nvme_iov_md": false 00:21:14.434 }, 00:21:14.434 "memory_domains": [ 00:21:14.434 { 00:21:14.434 "dma_device_id": "system", 00:21:14.434 "dma_device_type": 1 00:21:14.434 }, 00:21:14.434 { 00:21:14.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.434 "dma_device_type": 2 00:21:14.434 } 00:21:14.434 ], 00:21:14.434 "driver_specific": {} 00:21:14.434 } 00:21:14.434 ] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.434 [2024-10-07 07:42:13.977565] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:14.434 [2024-10-07 07:42:13.977772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:14.434 [2024-10-07 07:42:13.977886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.434 [2024-10-07 07:42:13.980403] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.434 07:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.694 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.695 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:21:14.695 "name": "Existed_Raid", 00:21:14.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.695 "strip_size_kb": 0, 00:21:14.695 "state": "configuring", 00:21:14.695 "raid_level": "raid1", 00:21:14.695 "superblock": false, 00:21:14.695 "num_base_bdevs": 3, 00:21:14.695 "num_base_bdevs_discovered": 2, 00:21:14.695 "num_base_bdevs_operational": 3, 00:21:14.695 "base_bdevs_list": [ 00:21:14.695 { 00:21:14.695 "name": "BaseBdev1", 00:21:14.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.695 "is_configured": false, 00:21:14.695 "data_offset": 0, 00:21:14.695 "data_size": 0 00:21:14.695 }, 00:21:14.695 { 00:21:14.695 "name": "BaseBdev2", 00:21:14.695 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:14.695 "is_configured": true, 00:21:14.695 "data_offset": 0, 00:21:14.695 "data_size": 65536 00:21:14.695 }, 00:21:14.695 { 00:21:14.695 "name": "BaseBdev3", 00:21:14.695 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:14.695 "is_configured": true, 00:21:14.695 "data_offset": 0, 00:21:14.695 "data_size": 65536 00:21:14.695 } 00:21:14.695 ] 00:21:14.695 }' 00:21:14.695 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.695 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.953 [2024-10-07 07:42:14.421639] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.953 "name": "Existed_Raid", 00:21:14.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.953 "strip_size_kb": 0, 00:21:14.953 "state": "configuring", 00:21:14.953 "raid_level": "raid1", 00:21:14.953 "superblock": false, 00:21:14.953 "num_base_bdevs": 3, 
00:21:14.953 "num_base_bdevs_discovered": 1, 00:21:14.953 "num_base_bdevs_operational": 3, 00:21:14.953 "base_bdevs_list": [ 00:21:14.953 { 00:21:14.953 "name": "BaseBdev1", 00:21:14.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.953 "is_configured": false, 00:21:14.953 "data_offset": 0, 00:21:14.953 "data_size": 0 00:21:14.953 }, 00:21:14.953 { 00:21:14.953 "name": null, 00:21:14.953 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:14.953 "is_configured": false, 00:21:14.953 "data_offset": 0, 00:21:14.953 "data_size": 65536 00:21:14.953 }, 00:21:14.953 { 00:21:14.953 "name": "BaseBdev3", 00:21:14.953 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:14.953 "is_configured": true, 00:21:14.953 "data_offset": 0, 00:21:14.953 "data_size": 65536 00:21:14.953 } 00:21:14.953 ] 00:21:14.953 }' 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.953 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:15.521 07:42:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.521 [2024-10-07 07:42:14.884774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:15.521 BaseBdev1 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.521 [ 00:21:15.521 { 00:21:15.521 "name": "BaseBdev1", 00:21:15.521 "aliases": [ 00:21:15.521 "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55" 00:21:15.521 ], 00:21:15.521 "product_name": "Malloc disk", 
00:21:15.521 "block_size": 512, 00:21:15.521 "num_blocks": 65536, 00:21:15.521 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:15.521 "assigned_rate_limits": { 00:21:15.521 "rw_ios_per_sec": 0, 00:21:15.521 "rw_mbytes_per_sec": 0, 00:21:15.521 "r_mbytes_per_sec": 0, 00:21:15.521 "w_mbytes_per_sec": 0 00:21:15.521 }, 00:21:15.521 "claimed": true, 00:21:15.521 "claim_type": "exclusive_write", 00:21:15.521 "zoned": false, 00:21:15.521 "supported_io_types": { 00:21:15.521 "read": true, 00:21:15.521 "write": true, 00:21:15.521 "unmap": true, 00:21:15.521 "flush": true, 00:21:15.521 "reset": true, 00:21:15.521 "nvme_admin": false, 00:21:15.521 "nvme_io": false, 00:21:15.521 "nvme_io_md": false, 00:21:15.521 "write_zeroes": true, 00:21:15.521 "zcopy": true, 00:21:15.521 "get_zone_info": false, 00:21:15.521 "zone_management": false, 00:21:15.521 "zone_append": false, 00:21:15.521 "compare": false, 00:21:15.521 "compare_and_write": false, 00:21:15.521 "abort": true, 00:21:15.521 "seek_hole": false, 00:21:15.521 "seek_data": false, 00:21:15.521 "copy": true, 00:21:15.521 "nvme_iov_md": false 00:21:15.521 }, 00:21:15.521 "memory_domains": [ 00:21:15.521 { 00:21:15.521 "dma_device_id": "system", 00:21:15.521 "dma_device_type": 1 00:21:15.521 }, 00:21:15.521 { 00:21:15.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.521 "dma_device_type": 2 00:21:15.521 } 00:21:15.521 ], 00:21:15.521 "driver_specific": {} 00:21:15.521 } 00:21:15.521 ] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:15.521 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.521 "name": "Existed_Raid", 00:21:15.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.521 "strip_size_kb": 0, 00:21:15.521 "state": "configuring", 00:21:15.521 "raid_level": "raid1", 00:21:15.521 "superblock": false, 00:21:15.521 "num_base_bdevs": 3, 00:21:15.521 "num_base_bdevs_discovered": 2, 00:21:15.521 "num_base_bdevs_operational": 3, 00:21:15.521 "base_bdevs_list": [ 00:21:15.521 { 00:21:15.521 "name": "BaseBdev1", 00:21:15.521 "uuid": 
"18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:15.521 "is_configured": true, 00:21:15.521 "data_offset": 0, 00:21:15.521 "data_size": 65536 00:21:15.521 }, 00:21:15.521 { 00:21:15.521 "name": null, 00:21:15.521 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:15.521 "is_configured": false, 00:21:15.521 "data_offset": 0, 00:21:15.521 "data_size": 65536 00:21:15.521 }, 00:21:15.521 { 00:21:15.521 "name": "BaseBdev3", 00:21:15.521 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:15.521 "is_configured": true, 00:21:15.522 "data_offset": 0, 00:21:15.522 "data_size": 65536 00:21:15.522 } 00:21:15.522 ] 00:21:15.522 }' 00:21:15.522 07:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.522 07:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.780 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.780 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:15.780 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:15.780 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.780 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:16.039 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:16.039 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:16.039 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:16.039 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.039 [2024-10-07 07:42:15.352978] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:16.039 07:42:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.040 "name": "Existed_Raid", 00:21:16.040 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:16.040 "strip_size_kb": 0, 00:21:16.040 "state": "configuring", 00:21:16.040 "raid_level": "raid1", 00:21:16.040 "superblock": false, 00:21:16.040 "num_base_bdevs": 3, 00:21:16.040 "num_base_bdevs_discovered": 1, 00:21:16.040 "num_base_bdevs_operational": 3, 00:21:16.040 "base_bdevs_list": [ 00:21:16.040 { 00:21:16.040 "name": "BaseBdev1", 00:21:16.040 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:16.040 "is_configured": true, 00:21:16.040 "data_offset": 0, 00:21:16.040 "data_size": 65536 00:21:16.040 }, 00:21:16.040 { 00:21:16.040 "name": null, 00:21:16.040 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:16.040 "is_configured": false, 00:21:16.040 "data_offset": 0, 00:21:16.040 "data_size": 65536 00:21:16.040 }, 00:21:16.040 { 00:21:16.040 "name": null, 00:21:16.040 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:16.040 "is_configured": false, 00:21:16.040 "data_offset": 0, 00:21:16.040 "data_size": 65536 00:21:16.040 } 00:21:16.040 ] 00:21:16.040 }' 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.040 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.299 [2024-10-07 07:42:15.825069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.299 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.300 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:16.558 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.558 "name": "Existed_Raid", 00:21:16.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.558 "strip_size_kb": 0, 00:21:16.558 "state": "configuring", 00:21:16.558 "raid_level": "raid1", 00:21:16.558 "superblock": false, 00:21:16.558 "num_base_bdevs": 3, 00:21:16.558 "num_base_bdevs_discovered": 2, 00:21:16.558 "num_base_bdevs_operational": 3, 00:21:16.558 "base_bdevs_list": [ 00:21:16.558 { 00:21:16.558 "name": "BaseBdev1", 00:21:16.558 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:16.558 "is_configured": true, 00:21:16.558 "data_offset": 0, 00:21:16.558 "data_size": 65536 00:21:16.558 }, 00:21:16.558 { 00:21:16.558 "name": null, 00:21:16.558 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:16.558 "is_configured": false, 00:21:16.558 "data_offset": 0, 00:21:16.558 "data_size": 65536 00:21:16.558 }, 00:21:16.558 { 00:21:16.558 "name": "BaseBdev3", 00:21:16.558 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:16.558 "is_configured": true, 00:21:16.558 "data_offset": 0, 00:21:16.558 "data_size": 65536 00:21:16.558 } 00:21:16.558 ] 00:21:16.558 }' 00:21:16.558 07:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.558 07:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:16.816 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.816 [2024-10-07 07:42:16.321262] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.074 "name": "Existed_Raid", 00:21:17.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.074 "strip_size_kb": 0, 00:21:17.074 "state": "configuring", 00:21:17.074 "raid_level": "raid1", 00:21:17.074 "superblock": false, 00:21:17.074 "num_base_bdevs": 3, 00:21:17.074 "num_base_bdevs_discovered": 1, 00:21:17.074 "num_base_bdevs_operational": 3, 00:21:17.074 "base_bdevs_list": [ 00:21:17.074 { 00:21:17.074 "name": null, 00:21:17.074 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:17.074 "is_configured": false, 00:21:17.074 "data_offset": 0, 00:21:17.074 "data_size": 65536 00:21:17.074 }, 00:21:17.074 { 00:21:17.074 "name": null, 00:21:17.074 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:17.074 "is_configured": false, 00:21:17.074 "data_offset": 0, 00:21:17.074 "data_size": 65536 00:21:17.074 }, 00:21:17.074 { 00:21:17.074 "name": "BaseBdev3", 00:21:17.074 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:17.074 "is_configured": true, 00:21:17.074 "data_offset": 0, 00:21:17.074 "data_size": 65536 00:21:17.074 } 00:21:17.074 ] 00:21:17.074 }' 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.074 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.641 [2024-10-07 07:42:16.949180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.641 07:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:17.641 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.641 "name": "Existed_Raid", 00:21:17.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.641 "strip_size_kb": 0, 00:21:17.641 "state": "configuring", 00:21:17.641 "raid_level": "raid1", 00:21:17.641 "superblock": false, 00:21:17.641 "num_base_bdevs": 3, 00:21:17.641 "num_base_bdevs_discovered": 2, 00:21:17.641 "num_base_bdevs_operational": 3, 00:21:17.641 "base_bdevs_list": [ 00:21:17.641 { 00:21:17.641 "name": null, 00:21:17.641 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:17.641 "is_configured": false, 00:21:17.641 "data_offset": 0, 00:21:17.641 "data_size": 65536 00:21:17.641 }, 00:21:17.641 { 00:21:17.641 "name": "BaseBdev2", 00:21:17.641 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:17.641 "is_configured": true, 00:21:17.641 "data_offset": 0, 00:21:17.641 "data_size": 65536 00:21:17.641 }, 00:21:17.641 { 00:21:17.641 "name": "BaseBdev3", 
00:21:17.641 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:17.641 "is_configured": true, 00:21:17.641 "data_offset": 0, 00:21:17.641 "data_size": 65536 00:21:17.641 } 00:21:17.641 ] 00:21:17.641 }' 00:21:17.641 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.641 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:17.900 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.159 [2024-10-07 07:42:17.527450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:18.159 [2024-10-07 07:42:17.527522] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:18.159 [2024-10-07 07:42:17.527533] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:18.159 [2024-10-07 07:42:17.527861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:18.159 [2024-10-07 07:42:17.528023] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:18.159 [2024-10-07 07:42:17.528040] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:18.159 [2024-10-07 07:42:17.528320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.159 NewBaseBdev 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.159 
07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.159 [ 00:21:18.159 { 00:21:18.159 "name": "NewBaseBdev", 00:21:18.159 "aliases": [ 00:21:18.159 "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55" 00:21:18.159 ], 00:21:18.159 "product_name": "Malloc disk", 00:21:18.159 "block_size": 512, 00:21:18.159 "num_blocks": 65536, 00:21:18.159 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:18.159 "assigned_rate_limits": { 00:21:18.159 "rw_ios_per_sec": 0, 00:21:18.159 "rw_mbytes_per_sec": 0, 00:21:18.159 "r_mbytes_per_sec": 0, 00:21:18.159 "w_mbytes_per_sec": 0 00:21:18.159 }, 00:21:18.159 "claimed": true, 00:21:18.159 "claim_type": "exclusive_write", 00:21:18.159 "zoned": false, 00:21:18.159 "supported_io_types": { 00:21:18.159 "read": true, 00:21:18.159 "write": true, 00:21:18.159 "unmap": true, 00:21:18.159 "flush": true, 00:21:18.159 "reset": true, 00:21:18.159 "nvme_admin": false, 00:21:18.159 "nvme_io": false, 00:21:18.159 "nvme_io_md": false, 00:21:18.159 "write_zeroes": true, 00:21:18.159 "zcopy": true, 00:21:18.159 "get_zone_info": false, 00:21:18.159 "zone_management": false, 00:21:18.159 "zone_append": false, 00:21:18.159 "compare": false, 00:21:18.159 "compare_and_write": false, 00:21:18.159 "abort": true, 00:21:18.159 "seek_hole": false, 00:21:18.159 "seek_data": false, 00:21:18.159 "copy": true, 00:21:18.159 "nvme_iov_md": false 00:21:18.159 }, 00:21:18.159 "memory_domains": [ 00:21:18.159 { 00:21:18.159 "dma_device_id": "system", 00:21:18.159 "dma_device_type": 1 
00:21:18.159 }, 00:21:18.159 { 00:21:18.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.159 "dma_device_type": 2 00:21:18.159 } 00:21:18.159 ], 00:21:18.159 "driver_specific": {} 00:21:18.159 } 00:21:18.159 ] 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.159 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.159 "name": "Existed_Raid", 00:21:18.159 "uuid": "0352180f-6cac-4d67-acbf-24051fe0e3d6", 00:21:18.159 "strip_size_kb": 0, 00:21:18.159 "state": "online", 00:21:18.159 "raid_level": "raid1", 00:21:18.160 "superblock": false, 00:21:18.160 "num_base_bdevs": 3, 00:21:18.160 "num_base_bdevs_discovered": 3, 00:21:18.160 "num_base_bdevs_operational": 3, 00:21:18.160 "base_bdevs_list": [ 00:21:18.160 { 00:21:18.160 "name": "NewBaseBdev", 00:21:18.160 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:18.160 "is_configured": true, 00:21:18.160 "data_offset": 0, 00:21:18.160 "data_size": 65536 00:21:18.160 }, 00:21:18.160 { 00:21:18.160 "name": "BaseBdev2", 00:21:18.160 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:18.160 "is_configured": true, 00:21:18.160 "data_offset": 0, 00:21:18.160 "data_size": 65536 00:21:18.160 }, 00:21:18.160 { 00:21:18.160 "name": "BaseBdev3", 00:21:18.160 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:18.160 "is_configured": true, 00:21:18.160 "data_offset": 0, 00:21:18.160 "data_size": 65536 00:21:18.160 } 00:21:18.160 ] 00:21:18.160 }' 00:21:18.160 07:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.160 07:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:18.726 [2024-10-07 07:42:18.032035] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.726 "name": "Existed_Raid", 00:21:18.726 "aliases": [ 00:21:18.726 "0352180f-6cac-4d67-acbf-24051fe0e3d6" 00:21:18.726 ], 00:21:18.726 "product_name": "Raid Volume", 00:21:18.726 "block_size": 512, 00:21:18.726 "num_blocks": 65536, 00:21:18.726 "uuid": "0352180f-6cac-4d67-acbf-24051fe0e3d6", 00:21:18.726 "assigned_rate_limits": { 00:21:18.726 "rw_ios_per_sec": 0, 00:21:18.726 "rw_mbytes_per_sec": 0, 00:21:18.726 "r_mbytes_per_sec": 0, 00:21:18.726 "w_mbytes_per_sec": 0 00:21:18.726 }, 00:21:18.726 "claimed": false, 00:21:18.726 "zoned": false, 00:21:18.726 "supported_io_types": { 00:21:18.726 "read": true, 00:21:18.726 "write": true, 00:21:18.726 "unmap": false, 00:21:18.726 "flush": false, 00:21:18.726 "reset": true, 00:21:18.726 "nvme_admin": false, 00:21:18.726 "nvme_io": false, 00:21:18.726 "nvme_io_md": false, 00:21:18.726 "write_zeroes": true, 00:21:18.726 "zcopy": false, 00:21:18.726 "get_zone_info": false, 00:21:18.726 "zone_management": false, 00:21:18.726 
"zone_append": false, 00:21:18.726 "compare": false, 00:21:18.726 "compare_and_write": false, 00:21:18.726 "abort": false, 00:21:18.726 "seek_hole": false, 00:21:18.726 "seek_data": false, 00:21:18.726 "copy": false, 00:21:18.726 "nvme_iov_md": false 00:21:18.726 }, 00:21:18.726 "memory_domains": [ 00:21:18.726 { 00:21:18.726 "dma_device_id": "system", 00:21:18.726 "dma_device_type": 1 00:21:18.726 }, 00:21:18.726 { 00:21:18.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.726 "dma_device_type": 2 00:21:18.726 }, 00:21:18.726 { 00:21:18.726 "dma_device_id": "system", 00:21:18.726 "dma_device_type": 1 00:21:18.726 }, 00:21:18.726 { 00:21:18.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.726 "dma_device_type": 2 00:21:18.726 }, 00:21:18.726 { 00:21:18.726 "dma_device_id": "system", 00:21:18.726 "dma_device_type": 1 00:21:18.726 }, 00:21:18.726 { 00:21:18.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.726 "dma_device_type": 2 00:21:18.726 } 00:21:18.726 ], 00:21:18.726 "driver_specific": { 00:21:18.726 "raid": { 00:21:18.726 "uuid": "0352180f-6cac-4d67-acbf-24051fe0e3d6", 00:21:18.726 "strip_size_kb": 0, 00:21:18.726 "state": "online", 00:21:18.726 "raid_level": "raid1", 00:21:18.726 "superblock": false, 00:21:18.726 "num_base_bdevs": 3, 00:21:18.726 "num_base_bdevs_discovered": 3, 00:21:18.726 "num_base_bdevs_operational": 3, 00:21:18.726 "base_bdevs_list": [ 00:21:18.726 { 00:21:18.726 "name": "NewBaseBdev", 00:21:18.726 "uuid": "18ec0a30-8cb3-47e5-8a6d-97f7a47e5a55", 00:21:18.726 "is_configured": true, 00:21:18.726 "data_offset": 0, 00:21:18.726 "data_size": 65536 00:21:18.726 }, 00:21:18.726 { 00:21:18.726 "name": "BaseBdev2", 00:21:18.726 "uuid": "7fae7b8b-d591-4af8-a7c1-446f053129db", 00:21:18.726 "is_configured": true, 00:21:18.726 "data_offset": 0, 00:21:18.726 "data_size": 65536 00:21:18.726 }, 00:21:18.726 { 00:21:18.726 "name": "BaseBdev3", 00:21:18.726 "uuid": "c2f53022-407f-4b66-b952-b4a78667c399", 00:21:18.726 "is_configured": true, 
00:21:18.726 "data_offset": 0, 00:21:18.726 "data_size": 65536 00:21:18.726 } 00:21:18.726 ] 00:21:18.726 } 00:21:18.726 } 00:21:18.726 }' 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:18.726 BaseBdev2 00:21:18.726 BaseBdev3' 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.726 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.727 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.984 [2024-10-07 07:42:18.311787] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:21:18.984 [2024-10-07 07:42:18.311831] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.984 [2024-10-07 07:42:18.311927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.984 [2024-10-07 07:42:18.312254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.984 [2024-10-07 07:42:18.312268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67501 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 67501 ']' 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # kill -0 67501 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 67501 00:21:18.984 killing process with pid 67501 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 67501' 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 67501 00:21:18.984 [2024-10-07 07:42:18.363485] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:21:18.984 07:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 67501 00:21:19.243 [2024-10-07 07:42:18.715411] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:21.143 00:21:21.143 real 0m11.305s 00:21:21.143 user 0m17.647s 00:21:21.143 sys 0m2.066s 00:21:21.143 ************************************ 00:21:21.143 END TEST raid_state_function_test 00:21:21.143 ************************************ 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.143 07:42:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:21:21.143 07:42:20 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:21:21.143 07:42:20 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:21.143 07:42:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:21.143 ************************************ 00:21:21.143 START TEST raid_state_function_test_sb 00:21:21.143 ************************************ 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 3 true 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68128 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68128' 00:21:21.143 Process raid pid: 68128 00:21:21.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68128 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 68128 ']' 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.143 07:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:21.144 07:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.144 07:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:21.144 07:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.144 [2024-10-07 07:42:20.412369] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:21:21.144 [2024-10-07 07:42:20.412544] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.144 [2024-10-07 07:42:20.597467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.402 [2024-10-07 07:42:20.863487] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.660 [2024-10-07 07:42:21.101366] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.660 [2024-10-07 07:42:21.101414] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 [2024-10-07 07:42:21.337004] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:21.919 [2024-10-07 07:42:21.337067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:21.919 [2024-10-07 07:42:21.337081] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:21.919 [2024-10-07 07:42:21.337098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:21.919 [2024-10-07 07:42:21.337107] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:21.919 [2024-10-07 07:42:21.337121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:21.919 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.919 "name": "Existed_Raid", 00:21:21.919 "uuid": "aa3e4c6b-c517-49b4-8d36-556726727e53", 00:21:21.919 "strip_size_kb": 0, 00:21:21.919 "state": "configuring", 00:21:21.919 "raid_level": "raid1", 00:21:21.919 "superblock": true, 00:21:21.919 "num_base_bdevs": 3, 00:21:21.919 "num_base_bdevs_discovered": 0, 00:21:21.920 "num_base_bdevs_operational": 3, 00:21:21.920 "base_bdevs_list": [ 00:21:21.920 { 00:21:21.920 "name": "BaseBdev1", 00:21:21.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.920 "is_configured": false, 00:21:21.920 "data_offset": 0, 00:21:21.920 "data_size": 0 00:21:21.920 }, 00:21:21.920 { 00:21:21.920 "name": "BaseBdev2", 00:21:21.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.920 "is_configured": false, 00:21:21.920 "data_offset": 0, 00:21:21.920 "data_size": 0 00:21:21.920 }, 00:21:21.920 { 00:21:21.920 "name": "BaseBdev3", 00:21:21.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.920 "is_configured": false, 00:21:21.920 "data_offset": 0, 00:21:21.920 "data_size": 0 00:21:21.920 } 00:21:21.920 ] 00:21:21.920 }' 00:21:21.920 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.920 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.490 [2024-10-07 07:42:21.753001] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:22.490 [2024-10-07 07:42:21.753231] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.490 [2024-10-07 07:42:21.761022] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:22.490 [2024-10-07 07:42:21.761077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:22.490 [2024-10-07 07:42:21.761089] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:22.490 [2024-10-07 07:42:21.761104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:22.490 [2024-10-07 07:42:21.761113] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:22.490 [2024-10-07 07:42:21.761127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.490 [2024-10-07 07:42:21.819850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.490 BaseBdev1 
00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.490 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.490 [ 00:21:22.490 { 00:21:22.490 "name": "BaseBdev1", 00:21:22.490 "aliases": [ 00:21:22.490 "529cf520-cd8a-48a2-a304-284c5040e3b2" 00:21:22.491 ], 00:21:22.491 "product_name": "Malloc disk", 00:21:22.491 "block_size": 512, 00:21:22.491 "num_blocks": 65536, 00:21:22.491 "uuid": "529cf520-cd8a-48a2-a304-284c5040e3b2", 00:21:22.491 "assigned_rate_limits": { 00:21:22.491 
"rw_ios_per_sec": 0, 00:21:22.491 "rw_mbytes_per_sec": 0, 00:21:22.491 "r_mbytes_per_sec": 0, 00:21:22.491 "w_mbytes_per_sec": 0 00:21:22.491 }, 00:21:22.491 "claimed": true, 00:21:22.491 "claim_type": "exclusive_write", 00:21:22.491 "zoned": false, 00:21:22.491 "supported_io_types": { 00:21:22.491 "read": true, 00:21:22.491 "write": true, 00:21:22.491 "unmap": true, 00:21:22.491 "flush": true, 00:21:22.491 "reset": true, 00:21:22.491 "nvme_admin": false, 00:21:22.491 "nvme_io": false, 00:21:22.491 "nvme_io_md": false, 00:21:22.491 "write_zeroes": true, 00:21:22.491 "zcopy": true, 00:21:22.491 "get_zone_info": false, 00:21:22.491 "zone_management": false, 00:21:22.491 "zone_append": false, 00:21:22.491 "compare": false, 00:21:22.491 "compare_and_write": false, 00:21:22.491 "abort": true, 00:21:22.491 "seek_hole": false, 00:21:22.491 "seek_data": false, 00:21:22.491 "copy": true, 00:21:22.491 "nvme_iov_md": false 00:21:22.491 }, 00:21:22.491 "memory_domains": [ 00:21:22.491 { 00:21:22.491 "dma_device_id": "system", 00:21:22.491 "dma_device_type": 1 00:21:22.491 }, 00:21:22.491 { 00:21:22.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.491 "dma_device_type": 2 00:21:22.491 } 00:21:22.491 ], 00:21:22.491 "driver_specific": {} 00:21:22.491 } 00:21:22.491 ] 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.491 "name": "Existed_Raid", 00:21:22.491 "uuid": "65bc802d-813a-4b3a-a659-ecadd215ad2d", 00:21:22.491 "strip_size_kb": 0, 00:21:22.491 "state": "configuring", 00:21:22.491 "raid_level": "raid1", 00:21:22.491 "superblock": true, 00:21:22.491 "num_base_bdevs": 3, 00:21:22.491 "num_base_bdevs_discovered": 1, 00:21:22.491 "num_base_bdevs_operational": 3, 00:21:22.491 "base_bdevs_list": [ 00:21:22.491 { 00:21:22.491 "name": "BaseBdev1", 00:21:22.491 "uuid": "529cf520-cd8a-48a2-a304-284c5040e3b2", 00:21:22.491 "is_configured": true, 00:21:22.491 "data_offset": 2048, 00:21:22.491 "data_size": 63488 
00:21:22.491 }, 00:21:22.491 { 00:21:22.491 "name": "BaseBdev2", 00:21:22.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.491 "is_configured": false, 00:21:22.491 "data_offset": 0, 00:21:22.491 "data_size": 0 00:21:22.491 }, 00:21:22.491 { 00:21:22.491 "name": "BaseBdev3", 00:21:22.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.491 "is_configured": false, 00:21:22.491 "data_offset": 0, 00:21:22.491 "data_size": 0 00:21:22.491 } 00:21:22.491 ] 00:21:22.491 }' 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.491 07:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.751 [2024-10-07 07:42:22.244030] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:22.751 [2024-10-07 07:42:22.244094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.751 [2024-10-07 07:42:22.252087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.751 [2024-10-07 07:42:22.254497] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:22.751 [2024-10-07 07:42:22.254703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:22.751 [2024-10-07 07:42:22.254746] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:22.751 [2024-10-07 07:42:22.254763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.751 "name": "Existed_Raid", 00:21:22.751 "uuid": "d4422de1-dfdf-4f59-8b25-2fe9694d4d07", 00:21:22.751 "strip_size_kb": 0, 00:21:22.751 "state": "configuring", 00:21:22.751 "raid_level": "raid1", 00:21:22.751 "superblock": true, 00:21:22.751 "num_base_bdevs": 3, 00:21:22.751 "num_base_bdevs_discovered": 1, 00:21:22.751 "num_base_bdevs_operational": 3, 00:21:22.751 "base_bdevs_list": [ 00:21:22.751 { 00:21:22.751 "name": "BaseBdev1", 00:21:22.751 "uuid": "529cf520-cd8a-48a2-a304-284c5040e3b2", 00:21:22.751 "is_configured": true, 00:21:22.751 "data_offset": 2048, 00:21:22.751 "data_size": 63488 00:21:22.751 }, 00:21:22.751 { 00:21:22.751 "name": "BaseBdev2", 00:21:22.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.751 "is_configured": false, 00:21:22.751 "data_offset": 0, 00:21:22.751 "data_size": 0 00:21:22.751 }, 00:21:22.751 { 00:21:22.751 "name": "BaseBdev3", 00:21:22.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.751 "is_configured": false, 00:21:22.751 "data_offset": 0, 00:21:22.751 "data_size": 0 00:21:22.751 } 00:21:22.751 ] 00:21:22.751 }' 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.751 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.319 [2024-10-07 07:42:22.700169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.319 BaseBdev2 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.319 [ 00:21:23.319 { 00:21:23.319 "name": "BaseBdev2", 00:21:23.319 "aliases": [ 00:21:23.319 "d1ff82f6-eaaa-41f8-9b3a-3b7be11c4655" 00:21:23.319 ], 00:21:23.319 "product_name": "Malloc disk", 00:21:23.319 "block_size": 512, 00:21:23.319 "num_blocks": 65536, 00:21:23.319 "uuid": "d1ff82f6-eaaa-41f8-9b3a-3b7be11c4655", 00:21:23.319 "assigned_rate_limits": { 00:21:23.319 "rw_ios_per_sec": 0, 00:21:23.319 "rw_mbytes_per_sec": 0, 00:21:23.319 "r_mbytes_per_sec": 0, 00:21:23.319 "w_mbytes_per_sec": 0 00:21:23.319 }, 00:21:23.319 "claimed": true, 00:21:23.319 "claim_type": "exclusive_write", 00:21:23.319 "zoned": false, 00:21:23.319 "supported_io_types": { 00:21:23.319 "read": true, 00:21:23.319 "write": true, 00:21:23.319 "unmap": true, 00:21:23.319 "flush": true, 00:21:23.319 "reset": true, 00:21:23.319 "nvme_admin": false, 00:21:23.319 "nvme_io": false, 00:21:23.319 "nvme_io_md": false, 00:21:23.319 "write_zeroes": true, 00:21:23.319 "zcopy": true, 00:21:23.319 "get_zone_info": false, 00:21:23.319 "zone_management": false, 00:21:23.319 "zone_append": false, 00:21:23.319 "compare": false, 00:21:23.319 "compare_and_write": false, 00:21:23.319 "abort": true, 00:21:23.319 "seek_hole": false, 00:21:23.319 "seek_data": false, 00:21:23.319 "copy": true, 00:21:23.319 "nvme_iov_md": false 00:21:23.319 }, 00:21:23.319 "memory_domains": [ 00:21:23.319 { 00:21:23.319 "dma_device_id": "system", 00:21:23.319 "dma_device_type": 1 00:21:23.319 }, 00:21:23.319 { 00:21:23.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.319 "dma_device_type": 2 00:21:23.319 } 00:21:23.319 ], 00:21:23.319 "driver_specific": {} 00:21:23.319 } 00:21:23.319 ] 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 
00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.319 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.319 
07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.319 "name": "Existed_Raid", 00:21:23.319 "uuid": "d4422de1-dfdf-4f59-8b25-2fe9694d4d07", 00:21:23.319 "strip_size_kb": 0, 00:21:23.319 "state": "configuring", 00:21:23.319 "raid_level": "raid1", 00:21:23.319 "superblock": true, 00:21:23.319 "num_base_bdevs": 3, 00:21:23.319 "num_base_bdevs_discovered": 2, 00:21:23.319 "num_base_bdevs_operational": 3, 00:21:23.319 "base_bdevs_list": [ 00:21:23.319 { 00:21:23.319 "name": "BaseBdev1", 00:21:23.319 "uuid": "529cf520-cd8a-48a2-a304-284c5040e3b2", 00:21:23.319 "is_configured": true, 00:21:23.319 "data_offset": 2048, 00:21:23.319 "data_size": 63488 00:21:23.319 }, 00:21:23.319 { 00:21:23.319 "name": "BaseBdev2", 00:21:23.319 "uuid": "d1ff82f6-eaaa-41f8-9b3a-3b7be11c4655", 00:21:23.319 "is_configured": true, 00:21:23.319 "data_offset": 2048, 00:21:23.319 "data_size": 63488 00:21:23.319 }, 00:21:23.319 { 00:21:23.319 "name": "BaseBdev3", 00:21:23.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.319 "is_configured": false, 00:21:23.319 "data_offset": 0, 00:21:23.320 "data_size": 0 00:21:23.320 } 00:21:23.320 ] 00:21:23.320 }' 00:21:23.320 07:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.320 07:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.887 [2024-10-07 07:42:23.183836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:23.887 [2024-10-07 07:42:23.184129] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:21:23.887 [2024-10-07 07:42:23.184167] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:23.887 [2024-10-07 07:42:23.184479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:23.887 BaseBdev3 00:21:23.887 [2024-10-07 07:42:23.184680] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:23.887 [2024-10-07 07:42:23.184697] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:23.887 [2024-10-07 07:42:23.184882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.887 07:42:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.887 [ 00:21:23.887 { 00:21:23.887 "name": "BaseBdev3", 00:21:23.887 "aliases": [ 00:21:23.887 "7c634f1f-d0bf-42dd-9408-1191878d087b" 00:21:23.887 ], 00:21:23.887 "product_name": "Malloc disk", 00:21:23.887 "block_size": 512, 00:21:23.887 "num_blocks": 65536, 00:21:23.887 "uuid": "7c634f1f-d0bf-42dd-9408-1191878d087b", 00:21:23.887 "assigned_rate_limits": { 00:21:23.887 "rw_ios_per_sec": 0, 00:21:23.887 "rw_mbytes_per_sec": 0, 00:21:23.887 "r_mbytes_per_sec": 0, 00:21:23.887 "w_mbytes_per_sec": 0 00:21:23.887 }, 00:21:23.887 "claimed": true, 00:21:23.887 "claim_type": "exclusive_write", 00:21:23.887 "zoned": false, 00:21:23.887 "supported_io_types": { 00:21:23.887 "read": true, 00:21:23.887 "write": true, 00:21:23.887 "unmap": true, 00:21:23.887 "flush": true, 00:21:23.887 "reset": true, 00:21:23.887 "nvme_admin": false, 00:21:23.887 "nvme_io": false, 00:21:23.887 "nvme_io_md": false, 00:21:23.887 "write_zeroes": true, 00:21:23.887 "zcopy": true, 00:21:23.887 "get_zone_info": false, 00:21:23.887 "zone_management": false, 00:21:23.887 "zone_append": false, 00:21:23.887 "compare": false, 00:21:23.887 "compare_and_write": false, 00:21:23.887 "abort": true, 00:21:23.887 "seek_hole": false, 00:21:23.887 "seek_data": false, 00:21:23.887 "copy": true, 00:21:23.887 "nvme_iov_md": false 00:21:23.887 }, 00:21:23.887 "memory_domains": [ 00:21:23.887 { 00:21:23.887 "dma_device_id": "system", 00:21:23.887 "dma_device_type": 1 00:21:23.887 }, 00:21:23.887 { 00:21:23.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.887 "dma_device_type": 2 00:21:23.887 } 00:21:23.887 ], 00:21:23.887 "driver_specific": {} 00:21:23.887 } 00:21:23.887 ] 
00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.887 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.888 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.888 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:23.888 
07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.888 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:23.888 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.888 "name": "Existed_Raid", 00:21:23.888 "uuid": "d4422de1-dfdf-4f59-8b25-2fe9694d4d07", 00:21:23.888 "strip_size_kb": 0, 00:21:23.888 "state": "online", 00:21:23.888 "raid_level": "raid1", 00:21:23.888 "superblock": true, 00:21:23.888 "num_base_bdevs": 3, 00:21:23.888 "num_base_bdevs_discovered": 3, 00:21:23.888 "num_base_bdevs_operational": 3, 00:21:23.888 "base_bdevs_list": [ 00:21:23.888 { 00:21:23.888 "name": "BaseBdev1", 00:21:23.888 "uuid": "529cf520-cd8a-48a2-a304-284c5040e3b2", 00:21:23.888 "is_configured": true, 00:21:23.888 "data_offset": 2048, 00:21:23.888 "data_size": 63488 00:21:23.888 }, 00:21:23.888 { 00:21:23.888 "name": "BaseBdev2", 00:21:23.888 "uuid": "d1ff82f6-eaaa-41f8-9b3a-3b7be11c4655", 00:21:23.888 "is_configured": true, 00:21:23.888 "data_offset": 2048, 00:21:23.888 "data_size": 63488 00:21:23.888 }, 00:21:23.888 { 00:21:23.888 "name": "BaseBdev3", 00:21:23.888 "uuid": "7c634f1f-d0bf-42dd-9408-1191878d087b", 00:21:23.888 "is_configured": true, 00:21:23.888 "data_offset": 2048, 00:21:23.888 "data_size": 63488 00:21:23.888 } 00:21:23.888 ] 00:21:23.888 }' 00:21:23.888 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.888 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:24.147 [2024-10-07 07:42:23.640580] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:24.147 "name": "Existed_Raid", 00:21:24.147 "aliases": [ 00:21:24.147 "d4422de1-dfdf-4f59-8b25-2fe9694d4d07" 00:21:24.147 ], 00:21:24.147 "product_name": "Raid Volume", 00:21:24.147 "block_size": 512, 00:21:24.147 "num_blocks": 63488, 00:21:24.147 "uuid": "d4422de1-dfdf-4f59-8b25-2fe9694d4d07", 00:21:24.147 "assigned_rate_limits": { 00:21:24.147 "rw_ios_per_sec": 0, 00:21:24.147 "rw_mbytes_per_sec": 0, 00:21:24.147 "r_mbytes_per_sec": 0, 00:21:24.147 "w_mbytes_per_sec": 0 00:21:24.147 }, 00:21:24.147 "claimed": false, 00:21:24.147 "zoned": false, 00:21:24.147 "supported_io_types": { 00:21:24.147 "read": true, 00:21:24.147 "write": true, 00:21:24.147 "unmap": false, 00:21:24.147 "flush": false, 00:21:24.147 "reset": true, 00:21:24.147 "nvme_admin": false, 00:21:24.147 "nvme_io": false, 00:21:24.147 "nvme_io_md": false, 00:21:24.147 "write_zeroes": true, 
00:21:24.147 "zcopy": false, 00:21:24.147 "get_zone_info": false, 00:21:24.147 "zone_management": false, 00:21:24.147 "zone_append": false, 00:21:24.147 "compare": false, 00:21:24.147 "compare_and_write": false, 00:21:24.147 "abort": false, 00:21:24.147 "seek_hole": false, 00:21:24.147 "seek_data": false, 00:21:24.147 "copy": false, 00:21:24.147 "nvme_iov_md": false 00:21:24.147 }, 00:21:24.147 "memory_domains": [ 00:21:24.147 { 00:21:24.147 "dma_device_id": "system", 00:21:24.147 "dma_device_type": 1 00:21:24.147 }, 00:21:24.147 { 00:21:24.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.147 "dma_device_type": 2 00:21:24.147 }, 00:21:24.147 { 00:21:24.147 "dma_device_id": "system", 00:21:24.147 "dma_device_type": 1 00:21:24.147 }, 00:21:24.147 { 00:21:24.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.147 "dma_device_type": 2 00:21:24.147 }, 00:21:24.147 { 00:21:24.147 "dma_device_id": "system", 00:21:24.147 "dma_device_type": 1 00:21:24.147 }, 00:21:24.147 { 00:21:24.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.147 "dma_device_type": 2 00:21:24.147 } 00:21:24.147 ], 00:21:24.147 "driver_specific": { 00:21:24.147 "raid": { 00:21:24.147 "uuid": "d4422de1-dfdf-4f59-8b25-2fe9694d4d07", 00:21:24.147 "strip_size_kb": 0, 00:21:24.147 "state": "online", 00:21:24.147 "raid_level": "raid1", 00:21:24.147 "superblock": true, 00:21:24.147 "num_base_bdevs": 3, 00:21:24.147 "num_base_bdevs_discovered": 3, 00:21:24.147 "num_base_bdevs_operational": 3, 00:21:24.147 "base_bdevs_list": [ 00:21:24.147 { 00:21:24.147 "name": "BaseBdev1", 00:21:24.147 "uuid": "529cf520-cd8a-48a2-a304-284c5040e3b2", 00:21:24.147 "is_configured": true, 00:21:24.147 "data_offset": 2048, 00:21:24.147 "data_size": 63488 00:21:24.147 }, 00:21:24.147 { 00:21:24.147 "name": "BaseBdev2", 00:21:24.147 "uuid": "d1ff82f6-eaaa-41f8-9b3a-3b7be11c4655", 00:21:24.147 "is_configured": true, 00:21:24.147 "data_offset": 2048, 00:21:24.147 "data_size": 63488 00:21:24.147 }, 00:21:24.147 { 
00:21:24.147 "name": "BaseBdev3", 00:21:24.147 "uuid": "7c634f1f-d0bf-42dd-9408-1191878d087b", 00:21:24.147 "is_configured": true, 00:21:24.147 "data_offset": 2048, 00:21:24.147 "data_size": 63488 00:21:24.147 } 00:21:24.147 ] 00:21:24.147 } 00:21:24.147 } 00:21:24.147 }' 00:21:24.147 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:24.406 BaseBdev2 00:21:24.406 BaseBdev3' 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.406 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.407 07:42:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:24.407 07:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.407 [2024-10-07 07:42:23.932155] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.665 
07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.665 "name": "Existed_Raid", 00:21:24.665 "uuid": "d4422de1-dfdf-4f59-8b25-2fe9694d4d07", 00:21:24.665 "strip_size_kb": 0, 00:21:24.665 "state": "online", 00:21:24.665 "raid_level": "raid1", 00:21:24.665 "superblock": true, 00:21:24.665 "num_base_bdevs": 3, 00:21:24.665 "num_base_bdevs_discovered": 2, 00:21:24.665 "num_base_bdevs_operational": 2, 00:21:24.665 "base_bdevs_list": [ 00:21:24.665 { 00:21:24.665 "name": null, 00:21:24.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.665 "is_configured": false, 00:21:24.665 "data_offset": 0, 00:21:24.665 "data_size": 63488 00:21:24.665 }, 00:21:24.665 { 00:21:24.665 "name": "BaseBdev2", 00:21:24.665 "uuid": "d1ff82f6-eaaa-41f8-9b3a-3b7be11c4655", 00:21:24.665 "is_configured": true, 00:21:24.665 "data_offset": 2048, 00:21:24.665 "data_size": 63488 00:21:24.665 }, 00:21:24.665 { 00:21:24.665 "name": "BaseBdev3", 00:21:24.665 "uuid": "7c634f1f-d0bf-42dd-9408-1191878d087b", 00:21:24.665 "is_configured": true, 00:21:24.665 "data_offset": 2048, 00:21:24.665 "data_size": 63488 00:21:24.665 } 00:21:24.665 ] 00:21:24.665 }' 00:21:24.665 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.665 
07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.923 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:24.923 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:24.923 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.923 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:24.923 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.923 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:24.923 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.182 [2024-10-07 07:42:24.486943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.182 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.182 [2024-10-07 07:42:24.644111] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:25.182 [2024-10-07 07:42:24.644240] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:25.441 [2024-10-07 07:42:24.750541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.442 [2024-10-07 07:42:24.750596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.442 [2024-10-07 07:42:24.750611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 BaseBdev2 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 
00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 [ 00:21:25.442 { 00:21:25.442 "name": "BaseBdev2", 00:21:25.442 "aliases": [ 00:21:25.442 "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5" 00:21:25.442 ], 00:21:25.442 "product_name": "Malloc disk", 00:21:25.442 "block_size": 512, 00:21:25.442 "num_blocks": 65536, 00:21:25.442 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:25.442 "assigned_rate_limits": { 00:21:25.442 "rw_ios_per_sec": 0, 00:21:25.442 "rw_mbytes_per_sec": 0, 00:21:25.442 "r_mbytes_per_sec": 0, 00:21:25.442 "w_mbytes_per_sec": 0 00:21:25.442 }, 00:21:25.442 "claimed": false, 00:21:25.442 "zoned": false, 00:21:25.442 "supported_io_types": { 00:21:25.442 "read": true, 00:21:25.442 "write": true, 00:21:25.442 "unmap": true, 00:21:25.442 "flush": true, 00:21:25.442 "reset": true, 00:21:25.442 "nvme_admin": false, 00:21:25.442 "nvme_io": false, 00:21:25.442 
"nvme_io_md": false, 00:21:25.442 "write_zeroes": true, 00:21:25.442 "zcopy": true, 00:21:25.442 "get_zone_info": false, 00:21:25.442 "zone_management": false, 00:21:25.442 "zone_append": false, 00:21:25.442 "compare": false, 00:21:25.442 "compare_and_write": false, 00:21:25.442 "abort": true, 00:21:25.442 "seek_hole": false, 00:21:25.442 "seek_data": false, 00:21:25.442 "copy": true, 00:21:25.442 "nvme_iov_md": false 00:21:25.442 }, 00:21:25.442 "memory_domains": [ 00:21:25.442 { 00:21:25.442 "dma_device_id": "system", 00:21:25.442 "dma_device_type": 1 00:21:25.442 }, 00:21:25.442 { 00:21:25.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.442 "dma_device_type": 2 00:21:25.442 } 00:21:25.442 ], 00:21:25.442 "driver_specific": {} 00:21:25.442 } 00:21:25.442 ] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 BaseBdev3 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 [ 00:21:25.442 { 00:21:25.442 "name": "BaseBdev3", 00:21:25.442 "aliases": [ 00:21:25.442 "cd40bedd-3706-4198-bf06-412616c455f7" 00:21:25.442 ], 00:21:25.442 "product_name": "Malloc disk", 00:21:25.442 "block_size": 512, 00:21:25.442 "num_blocks": 65536, 00:21:25.442 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:25.442 "assigned_rate_limits": { 00:21:25.442 "rw_ios_per_sec": 0, 00:21:25.442 "rw_mbytes_per_sec": 0, 00:21:25.442 "r_mbytes_per_sec": 0, 00:21:25.442 "w_mbytes_per_sec": 0 00:21:25.442 }, 00:21:25.442 "claimed": false, 00:21:25.442 "zoned": false, 00:21:25.442 "supported_io_types": { 00:21:25.442 "read": true, 00:21:25.442 "write": true, 00:21:25.442 "unmap": true, 00:21:25.442 "flush": true, 00:21:25.442 "reset": true, 00:21:25.442 "nvme_admin": false, 
00:21:25.442 "nvme_io": false, 00:21:25.442 "nvme_io_md": false, 00:21:25.442 "write_zeroes": true, 00:21:25.442 "zcopy": true, 00:21:25.442 "get_zone_info": false, 00:21:25.442 "zone_management": false, 00:21:25.442 "zone_append": false, 00:21:25.442 "compare": false, 00:21:25.442 "compare_and_write": false, 00:21:25.442 "abort": true, 00:21:25.442 "seek_hole": false, 00:21:25.442 "seek_data": false, 00:21:25.442 "copy": true, 00:21:25.442 "nvme_iov_md": false 00:21:25.442 }, 00:21:25.442 "memory_domains": [ 00:21:25.442 { 00:21:25.442 "dma_device_id": "system", 00:21:25.442 "dma_device_type": 1 00:21:25.442 }, 00:21:25.442 { 00:21:25.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.442 "dma_device_type": 2 00:21:25.442 } 00:21:25.442 ], 00:21:25.442 "driver_specific": {} 00:21:25.442 } 00:21:25.442 ] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.442 [2024-10-07 07:42:24.951878] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.442 [2024-10-07 07:42:24.953166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.442 [2024-10-07 07:42:24.953208] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.442 [2024-10-07 07:42:24.955445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.442 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.443 
07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.443 "name": "Existed_Raid", 00:21:25.443 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:25.443 "strip_size_kb": 0, 00:21:25.443 "state": "configuring", 00:21:25.443 "raid_level": "raid1", 00:21:25.443 "superblock": true, 00:21:25.443 "num_base_bdevs": 3, 00:21:25.443 "num_base_bdevs_discovered": 2, 00:21:25.443 "num_base_bdevs_operational": 3, 00:21:25.443 "base_bdevs_list": [ 00:21:25.443 { 00:21:25.443 "name": "BaseBdev1", 00:21:25.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.443 "is_configured": false, 00:21:25.443 "data_offset": 0, 00:21:25.443 "data_size": 0 00:21:25.443 }, 00:21:25.443 { 00:21:25.443 "name": "BaseBdev2", 00:21:25.443 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:25.443 "is_configured": true, 00:21:25.443 "data_offset": 2048, 00:21:25.443 "data_size": 63488 00:21:25.443 }, 00:21:25.443 { 00:21:25.443 "name": "BaseBdev3", 00:21:25.443 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:25.443 "is_configured": true, 00:21:25.443 "data_offset": 2048, 00:21:25.443 "data_size": 63488 00:21:25.443 } 00:21:25.443 ] 00:21:25.443 }' 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.443 07:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.057 [2024-10-07 07:42:25.383990] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:26.057 07:42:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:26.057 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.057 "name": 
"Existed_Raid", 00:21:26.057 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:26.057 "strip_size_kb": 0, 00:21:26.057 "state": "configuring", 00:21:26.057 "raid_level": "raid1", 00:21:26.057 "superblock": true, 00:21:26.057 "num_base_bdevs": 3, 00:21:26.057 "num_base_bdevs_discovered": 1, 00:21:26.057 "num_base_bdevs_operational": 3, 00:21:26.057 "base_bdevs_list": [ 00:21:26.057 { 00:21:26.058 "name": "BaseBdev1", 00:21:26.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.058 "is_configured": false, 00:21:26.058 "data_offset": 0, 00:21:26.058 "data_size": 0 00:21:26.058 }, 00:21:26.058 { 00:21:26.058 "name": null, 00:21:26.058 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:26.058 "is_configured": false, 00:21:26.058 "data_offset": 0, 00:21:26.058 "data_size": 63488 00:21:26.058 }, 00:21:26.058 { 00:21:26.058 "name": "BaseBdev3", 00:21:26.058 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:26.058 "is_configured": true, 00:21:26.058 "data_offset": 2048, 00:21:26.058 "data_size": 63488 00:21:26.058 } 00:21:26.058 ] 00:21:26.058 }' 00:21:26.058 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.058 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.316 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.316 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:26.316 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:26.316 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.316 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:26.316 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:26.316 
07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.575 [2024-10-07 07:42:25.917642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.575 BaseBdev1 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 
00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.575 [ 00:21:26.575 { 00:21:26.575 "name": "BaseBdev1", 00:21:26.575 "aliases": [ 00:21:26.575 "c02cae82-ab3b-4ac0-9b3d-f54611892011" 00:21:26.575 ], 00:21:26.575 "product_name": "Malloc disk", 00:21:26.575 "block_size": 512, 00:21:26.575 "num_blocks": 65536, 00:21:26.575 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:26.575 "assigned_rate_limits": { 00:21:26.575 "rw_ios_per_sec": 0, 00:21:26.575 "rw_mbytes_per_sec": 0, 00:21:26.575 "r_mbytes_per_sec": 0, 00:21:26.575 "w_mbytes_per_sec": 0 00:21:26.575 }, 00:21:26.575 "claimed": true, 00:21:26.575 "claim_type": "exclusive_write", 00:21:26.575 "zoned": false, 00:21:26.575 "supported_io_types": { 00:21:26.575 "read": true, 00:21:26.575 "write": true, 00:21:26.575 "unmap": true, 00:21:26.575 "flush": true, 00:21:26.575 "reset": true, 00:21:26.575 "nvme_admin": false, 00:21:26.575 "nvme_io": false, 00:21:26.575 "nvme_io_md": false, 00:21:26.575 "write_zeroes": true, 00:21:26.575 "zcopy": true, 00:21:26.575 "get_zone_info": false, 00:21:26.575 "zone_management": false, 00:21:26.575 "zone_append": false, 00:21:26.575 "compare": false, 00:21:26.575 "compare_and_write": false, 00:21:26.575 "abort": true, 00:21:26.575 "seek_hole": false, 00:21:26.575 "seek_data": false, 00:21:26.575 "copy": true, 00:21:26.575 "nvme_iov_md": false 00:21:26.575 }, 00:21:26.575 "memory_domains": [ 00:21:26.575 { 00:21:26.575 "dma_device_id": "system", 00:21:26.575 "dma_device_type": 1 00:21:26.575 }, 00:21:26.575 { 00:21:26.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.575 "dma_device_type": 2 00:21:26.575 } 00:21:26.575 ], 00:21:26.575 "driver_specific": {} 00:21:26.575 } 00:21:26.575 ] 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:21:26.575 
07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.575 "name": "Existed_Raid", 00:21:26.575 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:26.575 "strip_size_kb": 0, 
00:21:26.575 "state": "configuring", 00:21:26.575 "raid_level": "raid1", 00:21:26.575 "superblock": true, 00:21:26.575 "num_base_bdevs": 3, 00:21:26.575 "num_base_bdevs_discovered": 2, 00:21:26.575 "num_base_bdevs_operational": 3, 00:21:26.575 "base_bdevs_list": [ 00:21:26.575 { 00:21:26.575 "name": "BaseBdev1", 00:21:26.575 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:26.575 "is_configured": true, 00:21:26.575 "data_offset": 2048, 00:21:26.575 "data_size": 63488 00:21:26.575 }, 00:21:26.575 { 00:21:26.575 "name": null, 00:21:26.575 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:26.575 "is_configured": false, 00:21:26.575 "data_offset": 0, 00:21:26.575 "data_size": 63488 00:21:26.575 }, 00:21:26.575 { 00:21:26.575 "name": "BaseBdev3", 00:21:26.575 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:26.575 "is_configured": true, 00:21:26.575 "data_offset": 2048, 00:21:26.575 "data_size": 63488 00:21:26.575 } 00:21:26.575 ] 00:21:26.575 }' 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.575 07:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.833 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.833 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:26.833 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.833 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.092 [2024-10-07 07:42:26.437861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.092 "name": "Existed_Raid", 00:21:27.092 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:27.092 "strip_size_kb": 0, 00:21:27.092 "state": "configuring", 00:21:27.092 "raid_level": "raid1", 00:21:27.092 "superblock": true, 00:21:27.092 "num_base_bdevs": 3, 00:21:27.092 "num_base_bdevs_discovered": 1, 00:21:27.092 "num_base_bdevs_operational": 3, 00:21:27.092 "base_bdevs_list": [ 00:21:27.092 { 00:21:27.092 "name": "BaseBdev1", 00:21:27.092 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:27.092 "is_configured": true, 00:21:27.092 "data_offset": 2048, 00:21:27.092 "data_size": 63488 00:21:27.092 }, 00:21:27.092 { 00:21:27.092 "name": null, 00:21:27.092 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:27.092 "is_configured": false, 00:21:27.092 "data_offset": 0, 00:21:27.092 "data_size": 63488 00:21:27.092 }, 00:21:27.092 { 00:21:27.092 "name": null, 00:21:27.092 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:27.092 "is_configured": false, 00:21:27.092 "data_offset": 0, 00:21:27.092 "data_size": 63488 00:21:27.092 } 00:21:27.092 ] 00:21:27.092 }' 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.092 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.349 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.349 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:27.349 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.349 07:42:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:27.349 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.607 [2024-10-07 07:42:26.942034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.607 "name": "Existed_Raid", 00:21:27.607 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:27.607 "strip_size_kb": 0, 00:21:27.607 "state": "configuring", 00:21:27.607 "raid_level": "raid1", 00:21:27.607 "superblock": true, 00:21:27.607 "num_base_bdevs": 3, 00:21:27.607 "num_base_bdevs_discovered": 2, 00:21:27.607 "num_base_bdevs_operational": 3, 00:21:27.607 "base_bdevs_list": [ 00:21:27.607 { 00:21:27.607 "name": "BaseBdev1", 00:21:27.607 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:27.607 "is_configured": true, 00:21:27.607 "data_offset": 2048, 00:21:27.607 "data_size": 63488 00:21:27.607 }, 00:21:27.607 { 00:21:27.607 "name": null, 00:21:27.607 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:27.607 "is_configured": false, 00:21:27.607 "data_offset": 0, 00:21:27.607 "data_size": 63488 00:21:27.607 }, 00:21:27.607 { 00:21:27.607 "name": "BaseBdev3", 00:21:27.607 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:27.607 "is_configured": true, 00:21:27.607 "data_offset": 2048, 00:21:27.607 "data_size": 63488 00:21:27.607 } 00:21:27.607 ] 00:21:27.607 }' 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.607 07:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.865 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:27.865 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.865 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:27.865 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.123 [2024-10-07 07:42:27.466204] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.123 "name": "Existed_Raid", 00:21:28.123 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:28.123 "strip_size_kb": 0, 00:21:28.123 "state": "configuring", 00:21:28.123 "raid_level": "raid1", 00:21:28.123 "superblock": true, 00:21:28.123 "num_base_bdevs": 3, 00:21:28.123 "num_base_bdevs_discovered": 1, 00:21:28.123 "num_base_bdevs_operational": 3, 00:21:28.123 "base_bdevs_list": [ 00:21:28.123 { 00:21:28.123 "name": null, 00:21:28.123 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:28.123 "is_configured": false, 00:21:28.123 "data_offset": 0, 00:21:28.123 "data_size": 63488 00:21:28.123 }, 00:21:28.123 { 00:21:28.123 "name": null, 00:21:28.123 "uuid": 
"b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:28.123 "is_configured": false, 00:21:28.123 "data_offset": 0, 00:21:28.123 "data_size": 63488 00:21:28.123 }, 00:21:28.123 { 00:21:28.123 "name": "BaseBdev3", 00:21:28.123 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:28.123 "is_configured": true, 00:21:28.123 "data_offset": 2048, 00:21:28.123 "data_size": 63488 00:21:28.123 } 00:21:28.123 ] 00:21:28.123 }' 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.123 07:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.688 [2024-10-07 07:42:28.063991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.688 "name": "Existed_Raid", 00:21:28.688 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:28.688 "strip_size_kb": 0, 00:21:28.688 "state": "configuring", 00:21:28.688 
"raid_level": "raid1", 00:21:28.688 "superblock": true, 00:21:28.688 "num_base_bdevs": 3, 00:21:28.688 "num_base_bdevs_discovered": 2, 00:21:28.688 "num_base_bdevs_operational": 3, 00:21:28.688 "base_bdevs_list": [ 00:21:28.688 { 00:21:28.688 "name": null, 00:21:28.688 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:28.688 "is_configured": false, 00:21:28.688 "data_offset": 0, 00:21:28.688 "data_size": 63488 00:21:28.688 }, 00:21:28.688 { 00:21:28.688 "name": "BaseBdev2", 00:21:28.688 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:28.688 "is_configured": true, 00:21:28.688 "data_offset": 2048, 00:21:28.688 "data_size": 63488 00:21:28.688 }, 00:21:28.688 { 00:21:28.688 "name": "BaseBdev3", 00:21:28.688 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:28.688 "is_configured": true, 00:21:28.688 "data_offset": 2048, 00:21:28.688 "data_size": 63488 00:21:28.688 } 00:21:28.688 ] 00:21:28.688 }' 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.688 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.259 07:42:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c02cae82-ab3b-4ac0-9b3d-f54611892011 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.259 [2024-10-07 07:42:28.667595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:29.259 NewBaseBdev 00:21:29.259 [2024-10-07 07:42:28.668113] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:29.259 [2024-10-07 07:42:28.668139] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:29.259 [2024-10-07 07:42:28.668448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:29.259 [2024-10-07 07:42:28.668632] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:29.259 [2024-10-07 07:42:28.668650] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:29.259 [2024-10-07 07:42:28.668820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:29.259 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:29.260 
07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.260 [ 00:21:29.260 { 00:21:29.260 "name": "NewBaseBdev", 00:21:29.260 "aliases": [ 00:21:29.260 "c02cae82-ab3b-4ac0-9b3d-f54611892011" 00:21:29.260 ], 00:21:29.260 "product_name": "Malloc disk", 00:21:29.260 "block_size": 512, 00:21:29.260 "num_blocks": 65536, 00:21:29.260 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:29.260 "assigned_rate_limits": { 00:21:29.260 "rw_ios_per_sec": 0, 00:21:29.260 "rw_mbytes_per_sec": 0, 00:21:29.260 "r_mbytes_per_sec": 0, 00:21:29.260 "w_mbytes_per_sec": 0 00:21:29.260 }, 00:21:29.260 "claimed": true, 00:21:29.260 "claim_type": "exclusive_write", 00:21:29.260 
"zoned": false, 00:21:29.260 "supported_io_types": { 00:21:29.260 "read": true, 00:21:29.260 "write": true, 00:21:29.260 "unmap": true, 00:21:29.260 "flush": true, 00:21:29.260 "reset": true, 00:21:29.260 "nvme_admin": false, 00:21:29.260 "nvme_io": false, 00:21:29.260 "nvme_io_md": false, 00:21:29.260 "write_zeroes": true, 00:21:29.260 "zcopy": true, 00:21:29.260 "get_zone_info": false, 00:21:29.260 "zone_management": false, 00:21:29.260 "zone_append": false, 00:21:29.260 "compare": false, 00:21:29.260 "compare_and_write": false, 00:21:29.260 "abort": true, 00:21:29.260 "seek_hole": false, 00:21:29.260 "seek_data": false, 00:21:29.260 "copy": true, 00:21:29.260 "nvme_iov_md": false 00:21:29.260 }, 00:21:29.260 "memory_domains": [ 00:21:29.260 { 00:21:29.260 "dma_device_id": "system", 00:21:29.260 "dma_device_type": 1 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.260 "dma_device_type": 2 00:21:29.260 } 00:21:29.260 ], 00:21:29.260 "driver_specific": {} 00:21:29.260 } 00:21:29.260 ] 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.260 "name": "Existed_Raid", 00:21:29.260 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:29.260 "strip_size_kb": 0, 00:21:29.260 "state": "online", 00:21:29.260 "raid_level": "raid1", 00:21:29.260 "superblock": true, 00:21:29.260 "num_base_bdevs": 3, 00:21:29.260 "num_base_bdevs_discovered": 3, 00:21:29.260 "num_base_bdevs_operational": 3, 00:21:29.260 "base_bdevs_list": [ 00:21:29.260 { 00:21:29.260 "name": "NewBaseBdev", 00:21:29.260 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:29.260 "is_configured": true, 00:21:29.260 "data_offset": 2048, 00:21:29.260 "data_size": 63488 00:21:29.260 }, 00:21:29.260 { 00:21:29.260 "name": "BaseBdev2", 00:21:29.260 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:29.260 "is_configured": true, 00:21:29.260 "data_offset": 2048, 00:21:29.260 "data_size": 63488 00:21:29.260 }, 00:21:29.260 
{ 00:21:29.260 "name": "BaseBdev3", 00:21:29.260 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:29.260 "is_configured": true, 00:21:29.260 "data_offset": 2048, 00:21:29.260 "data_size": 63488 00:21:29.260 } 00:21:29.260 ] 00:21:29.260 }' 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.260 07:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.827 [2024-10-07 07:42:29.220145] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:29.827 "name": "Existed_Raid", 00:21:29.827 
"aliases": [ 00:21:29.827 "723a081a-368b-4321-b3ec-5e1b24d00fbc" 00:21:29.827 ], 00:21:29.827 "product_name": "Raid Volume", 00:21:29.827 "block_size": 512, 00:21:29.827 "num_blocks": 63488, 00:21:29.827 "uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:29.827 "assigned_rate_limits": { 00:21:29.827 "rw_ios_per_sec": 0, 00:21:29.827 "rw_mbytes_per_sec": 0, 00:21:29.827 "r_mbytes_per_sec": 0, 00:21:29.827 "w_mbytes_per_sec": 0 00:21:29.827 }, 00:21:29.827 "claimed": false, 00:21:29.827 "zoned": false, 00:21:29.827 "supported_io_types": { 00:21:29.827 "read": true, 00:21:29.827 "write": true, 00:21:29.827 "unmap": false, 00:21:29.827 "flush": false, 00:21:29.827 "reset": true, 00:21:29.827 "nvme_admin": false, 00:21:29.827 "nvme_io": false, 00:21:29.827 "nvme_io_md": false, 00:21:29.827 "write_zeroes": true, 00:21:29.827 "zcopy": false, 00:21:29.827 "get_zone_info": false, 00:21:29.827 "zone_management": false, 00:21:29.827 "zone_append": false, 00:21:29.827 "compare": false, 00:21:29.827 "compare_and_write": false, 00:21:29.827 "abort": false, 00:21:29.827 "seek_hole": false, 00:21:29.827 "seek_data": false, 00:21:29.827 "copy": false, 00:21:29.827 "nvme_iov_md": false 00:21:29.827 }, 00:21:29.827 "memory_domains": [ 00:21:29.827 { 00:21:29.827 "dma_device_id": "system", 00:21:29.827 "dma_device_type": 1 00:21:29.827 }, 00:21:29.827 { 00:21:29.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.827 "dma_device_type": 2 00:21:29.827 }, 00:21:29.827 { 00:21:29.827 "dma_device_id": "system", 00:21:29.827 "dma_device_type": 1 00:21:29.827 }, 00:21:29.827 { 00:21:29.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.827 "dma_device_type": 2 00:21:29.827 }, 00:21:29.827 { 00:21:29.827 "dma_device_id": "system", 00:21:29.827 "dma_device_type": 1 00:21:29.827 }, 00:21:29.827 { 00:21:29.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.827 "dma_device_type": 2 00:21:29.827 } 00:21:29.827 ], 00:21:29.827 "driver_specific": { 00:21:29.827 "raid": { 00:21:29.827 
"uuid": "723a081a-368b-4321-b3ec-5e1b24d00fbc", 00:21:29.827 "strip_size_kb": 0, 00:21:29.827 "state": "online", 00:21:29.827 "raid_level": "raid1", 00:21:29.827 "superblock": true, 00:21:29.827 "num_base_bdevs": 3, 00:21:29.827 "num_base_bdevs_discovered": 3, 00:21:29.827 "num_base_bdevs_operational": 3, 00:21:29.827 "base_bdevs_list": [ 00:21:29.827 { 00:21:29.827 "name": "NewBaseBdev", 00:21:29.827 "uuid": "c02cae82-ab3b-4ac0-9b3d-f54611892011", 00:21:29.827 "is_configured": true, 00:21:29.827 "data_offset": 2048, 00:21:29.827 "data_size": 63488 00:21:29.827 }, 00:21:29.827 { 00:21:29.827 "name": "BaseBdev2", 00:21:29.827 "uuid": "b9e6509b-9d7a-46ab-b7b7-c8149f734fb5", 00:21:29.827 "is_configured": true, 00:21:29.827 "data_offset": 2048, 00:21:29.827 "data_size": 63488 00:21:29.827 }, 00:21:29.827 { 00:21:29.827 "name": "BaseBdev3", 00:21:29.827 "uuid": "cd40bedd-3706-4198-bf06-412616c455f7", 00:21:29.827 "is_configured": true, 00:21:29.827 "data_offset": 2048, 00:21:29.827 "data_size": 63488 00:21:29.827 } 00:21:29.827 ] 00:21:29.827 } 00:21:29.827 } 00:21:29.827 }' 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:29.827 BaseBdev2 00:21:29.827 BaseBdev3' 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:29.827 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:29.828 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:29.828 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:29.828 07:42:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:29.828 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.828 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.085 07:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.085 [2024-10-07 07:42:29.511857] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:30.085 [2024-10-07 07:42:29.511899] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.085 [2024-10-07 07:42:29.511990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.085 [2024-10-07 07:42:29.512315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:30.085 [2024-10-07 07:42:29.512330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68128 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # 
'[' -z 68128 ']' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 68128 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 68128 00:21:30.085 killing process with pid 68128 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 68128' 00:21:30.085 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 68128 00:21:30.086 [2024-10-07 07:42:29.553575] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.086 07:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 68128 00:21:30.343 [2024-10-07 07:42:29.878692] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.720 07:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:31.720 00:21:31.720 real 0m10.967s 00:21:31.720 user 0m17.285s 00:21:31.720 sys 0m1.954s 00:21:31.720 ************************************ 00:21:31.720 END TEST raid_state_function_test_sb 00:21:31.720 ************************************ 00:21:31.720 07:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:31.720 07:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.979 07:42:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:21:31.979 07:42:31 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:21:31.979 07:42:31 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:31.979 07:42:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.979 ************************************ 00:21:31.979 START TEST raid_superblock_test 00:21:31.979 ************************************ 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid1 3 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:31.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68754 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68754 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 68754 ']' 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:31.979 07:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.979 [2024-10-07 07:42:31.414153] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:21:31.979 [2024-10-07 07:42:31.414307] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68754 ] 00:21:32.238 [2024-10-07 07:42:31.580122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.496 [2024-10-07 07:42:31.801306] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.496 [2024-10-07 07:42:32.020279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.496 [2024-10-07 07:42:32.020345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:33.062 
07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.062 malloc1 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.062 [2024-10-07 07:42:32.391353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:33.062 [2024-10-07 07:42:32.391578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.062 [2024-10-07 07:42:32.391650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:33.062 [2024-10-07 07:42:32.391777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.062 [2024-10-07 07:42:32.394578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.062 [2024-10-07 07:42:32.394748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:33.062 pt1 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.062 malloc2 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.062 [2024-10-07 07:42:32.462526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:33.062 [2024-10-07 07:42:32.462721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.062 [2024-10-07 07:42:32.462784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:33.062 [2024-10-07 07:42:32.462864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.062 [2024-10-07 07:42:32.465451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.062 pt2 00:21:33.062 [2024-10-07 07:42:32.465601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.062 malloc3 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.062 [2024-10-07 07:42:32.520209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:33.062 [2024-10-07 07:42:32.520383] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.062 [2024-10-07 07:42:32.520446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:33.062 [2024-10-07 07:42:32.520566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.062 [2024-10-07 07:42:32.523198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.062 [2024-10-07 07:42:32.523339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:33.062 pt3 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.062 [2024-10-07 07:42:32.528396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:33.062 [2024-10-07 07:42:32.530698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:33.062 [2024-10-07 07:42:32.530888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:33.062 [2024-10-07 07:42:32.531201] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:33.062 [2024-10-07 07:42:32.531306] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:33.062 [2024-10-07 07:42:32.531630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:33.062 
[2024-10-07 07:42:32.531950] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:33.062 [2024-10-07 07:42:32.531968] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:33.062 [2024-10-07 07:42:32.532143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.062 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.063 "name": "raid_bdev1", 00:21:33.063 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:33.063 "strip_size_kb": 0, 00:21:33.063 "state": "online", 00:21:33.063 "raid_level": "raid1", 00:21:33.063 "superblock": true, 00:21:33.063 "num_base_bdevs": 3, 00:21:33.063 "num_base_bdevs_discovered": 3, 00:21:33.063 "num_base_bdevs_operational": 3, 00:21:33.063 "base_bdevs_list": [ 00:21:33.063 { 00:21:33.063 "name": "pt1", 00:21:33.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.063 "is_configured": true, 00:21:33.063 "data_offset": 2048, 00:21:33.063 "data_size": 63488 00:21:33.063 }, 00:21:33.063 { 00:21:33.063 "name": "pt2", 00:21:33.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.063 "is_configured": true, 00:21:33.063 "data_offset": 2048, 00:21:33.063 "data_size": 63488 00:21:33.063 }, 00:21:33.063 { 00:21:33.063 "name": "pt3", 00:21:33.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:33.063 "is_configured": true, 00:21:33.063 "data_offset": 2048, 00:21:33.063 "data_size": 63488 00:21:33.063 } 00:21:33.063 ] 00:21:33.063 }' 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.063 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:33.629 07:42:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.629 [2024-10-07 07:42:32.932840] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.629 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:33.629 "name": "raid_bdev1", 00:21:33.629 "aliases": [ 00:21:33.629 "a7d2b64c-623d-4056-b59c-43d9bda16767" 00:21:33.629 ], 00:21:33.629 "product_name": "Raid Volume", 00:21:33.629 "block_size": 512, 00:21:33.629 "num_blocks": 63488, 00:21:33.629 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:33.629 "assigned_rate_limits": { 00:21:33.629 "rw_ios_per_sec": 0, 00:21:33.629 "rw_mbytes_per_sec": 0, 00:21:33.629 "r_mbytes_per_sec": 0, 00:21:33.629 "w_mbytes_per_sec": 0 00:21:33.629 }, 00:21:33.629 "claimed": false, 00:21:33.629 "zoned": false, 00:21:33.629 "supported_io_types": { 00:21:33.629 "read": true, 00:21:33.629 "write": true, 00:21:33.630 "unmap": false, 00:21:33.630 "flush": false, 00:21:33.630 "reset": true, 00:21:33.630 "nvme_admin": false, 00:21:33.630 "nvme_io": false, 00:21:33.630 "nvme_io_md": false, 00:21:33.630 "write_zeroes": true, 00:21:33.630 "zcopy": false, 00:21:33.630 "get_zone_info": false, 00:21:33.630 "zone_management": false, 00:21:33.630 "zone_append": false, 00:21:33.630 "compare": false, 00:21:33.630 
"compare_and_write": false, 00:21:33.630 "abort": false, 00:21:33.630 "seek_hole": false, 00:21:33.630 "seek_data": false, 00:21:33.630 "copy": false, 00:21:33.630 "nvme_iov_md": false 00:21:33.630 }, 00:21:33.630 "memory_domains": [ 00:21:33.630 { 00:21:33.630 "dma_device_id": "system", 00:21:33.630 "dma_device_type": 1 00:21:33.630 }, 00:21:33.630 { 00:21:33.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.630 "dma_device_type": 2 00:21:33.630 }, 00:21:33.630 { 00:21:33.630 "dma_device_id": "system", 00:21:33.630 "dma_device_type": 1 00:21:33.630 }, 00:21:33.630 { 00:21:33.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.630 "dma_device_type": 2 00:21:33.630 }, 00:21:33.630 { 00:21:33.630 "dma_device_id": "system", 00:21:33.630 "dma_device_type": 1 00:21:33.630 }, 00:21:33.630 { 00:21:33.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.630 "dma_device_type": 2 00:21:33.630 } 00:21:33.630 ], 00:21:33.630 "driver_specific": { 00:21:33.630 "raid": { 00:21:33.630 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:33.630 "strip_size_kb": 0, 00:21:33.630 "state": "online", 00:21:33.630 "raid_level": "raid1", 00:21:33.630 "superblock": true, 00:21:33.630 "num_base_bdevs": 3, 00:21:33.630 "num_base_bdevs_discovered": 3, 00:21:33.630 "num_base_bdevs_operational": 3, 00:21:33.630 "base_bdevs_list": [ 00:21:33.630 { 00:21:33.630 "name": "pt1", 00:21:33.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.630 "is_configured": true, 00:21:33.630 "data_offset": 2048, 00:21:33.630 "data_size": 63488 00:21:33.630 }, 00:21:33.630 { 00:21:33.630 "name": "pt2", 00:21:33.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.630 "is_configured": true, 00:21:33.630 "data_offset": 2048, 00:21:33.630 "data_size": 63488 00:21:33.630 }, 00:21:33.630 { 00:21:33.630 "name": "pt3", 00:21:33.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:33.630 "is_configured": true, 00:21:33.630 "data_offset": 2048, 00:21:33.630 "data_size": 63488 00:21:33.630 } 
00:21:33.630 ] 00:21:33.630 } 00:21:33.630 } 00:21:33.630 }' 00:21:33.630 07:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:33.630 pt2 00:21:33.630 pt3' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.630 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.630 [2024-10-07 07:42:33.184814] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 
-- # [[ 0 == 0 ]] 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a7d2b64c-623d-4056-b59c-43d9bda16767 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a7d2b64c-623d-4056-b59c-43d9bda16767 ']' 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 [2024-10-07 07:42:33.228512] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:33.888 [2024-10-07 07:42:33.228551] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:33.888 [2024-10-07 07:42:33.228650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.888 [2024-10-07 07:42:33.228749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.888 [2024-10-07 07:42:33.228764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
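An aside for readers following the xtrace: the `verify_raid_bdev_state raid_bdev1 online raid1 0 3` call above boils down to field checks on the JSON that `rpc_cmd bdev_raid_get_bdevs all` dumped earlier in this log. A minimal stand-alone sketch of those checks, using the JSON values captured above (trimmed to the asserted fields; this is an illustration of what the shell helper verifies, not SPDK code):

```python
import json

# JSON as dumped by "rpc_cmd bdev_raid_get_bdevs all" earlier in this log,
# trimmed to the fields verify_raid_bdev_state actually asserts on.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt3", "uuid": "00000000-0000-0000-0000-000000000003",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

# What "verify_raid_bdev_state raid_bdev1 online raid1 0 3" effectively checks:
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0
assert raid_bdev_info["num_base_bdevs_operational"] == 3

# The jq filter 'select(.is_configured == true).name' from the log, in Python:
configured = [b["name"] for b in raid_bdev_info["base_bdevs_list"]
              if b["is_configured"]]
print(" ".join(configured))  # pt1 pt2 pt3
```

The same pattern explains the later `verify_raid_bdev_state raid_bdev1 configuring raid1 0 3` call: only the expected `state` and `num_base_bdevs_discovered` values differ.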
00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:33.888 07:42:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:33.888 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.889 [2024-10-07 07:42:33.348580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:33.889 [2024-10-07 07:42:33.350892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:33.889 [2024-10-07 07:42:33.350951] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:33.889 [2024-10-07 07:42:33.351011] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:33.889 [2024-10-07 07:42:33.351072] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:33.889 [2024-10-07 07:42:33.351096] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:33.889 [2024-10-07 07:42:33.351118] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:33.889 [2024-10-07 07:42:33.351130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:33.889 request: 00:21:33.889 { 00:21:33.889 "name": "raid_bdev1", 00:21:33.889 "raid_level": "raid1", 00:21:33.889 "base_bdevs": [ 00:21:33.889 "malloc1", 00:21:33.889 "malloc2", 00:21:33.889 "malloc3" 00:21:33.889 ], 00:21:33.889 "superblock": false, 00:21:33.889 "method": "bdev_raid_create", 00:21:33.889 "req_id": 1 00:21:33.889 } 00:21:33.889 Got JSON-RPC error response 00:21:33.889 response: 00:21:33.889 { 00:21:33.889 "code": -17, 00:21:33.889 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:33.889 } 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.889 [2024-10-07 07:42:33.412577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:33.889 [2024-10-07 07:42:33.412801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.889 [2024-10-07 07:42:33.412973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:33.889 [2024-10-07 07:42:33.413091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.889 [2024-10-07 07:42:33.415893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.889 [2024-10-07 07:42:33.416036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:33.889 [2024-10-07 07:42:33.416216] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:33.889 pt1 00:21:33.889 [2024-10-07 07:42:33.416359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:33.889 
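The `NOT rpc_cmd bdev_raid_create ...` step above is the negative path: recreating `raid_bdev1` from malloc bdevs that already carry its superblock must fail, and the log captures the JSON-RPC error response it expects. A small sketch checking that response shape, using the exact values from the log (error code `-17` corresponds to `-EEXIST` on Linux):

```python
import json

# Error response captured above when bdev_raid_create is retried on base
# bdevs whose superblocks already belong to raid_bdev1.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# The NOT wrapper in the test succeeds only if rpc_cmd fails like this:
assert response["code"] == -17          # -EEXIST
assert "File exists" in response["message"]
print("negative-path response matches")
```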
07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.889 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:34.146 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.146 "name": "raid_bdev1", 00:21:34.146 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:34.146 "strip_size_kb": 0, 00:21:34.146 
"state": "configuring", 00:21:34.146 "raid_level": "raid1", 00:21:34.146 "superblock": true, 00:21:34.146 "num_base_bdevs": 3, 00:21:34.146 "num_base_bdevs_discovered": 1, 00:21:34.146 "num_base_bdevs_operational": 3, 00:21:34.146 "base_bdevs_list": [ 00:21:34.146 { 00:21:34.146 "name": "pt1", 00:21:34.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.146 "is_configured": true, 00:21:34.146 "data_offset": 2048, 00:21:34.146 "data_size": 63488 00:21:34.146 }, 00:21:34.146 { 00:21:34.146 "name": null, 00:21:34.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.146 "is_configured": false, 00:21:34.146 "data_offset": 2048, 00:21:34.146 "data_size": 63488 00:21:34.146 }, 00:21:34.146 { 00:21:34.146 "name": null, 00:21:34.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:34.146 "is_configured": false, 00:21:34.146 "data_offset": 2048, 00:21:34.146 "data_size": 63488 00:21:34.146 } 00:21:34.146 ] 00:21:34.146 }' 00:21:34.146 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.146 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.405 [2024-10-07 07:42:33.864724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:34.405 [2024-10-07 07:42:33.864960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.405 [2024-10-07 07:42:33.864999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:34.405 
[2024-10-07 07:42:33.865012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.405 [2024-10-07 07:42:33.865524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.405 [2024-10-07 07:42:33.865546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:34.405 [2024-10-07 07:42:33.865648] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:34.405 [2024-10-07 07:42:33.865685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:34.405 pt2 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.405 [2024-10-07 07:42:33.872755] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.405 "name": "raid_bdev1", 00:21:34.405 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:34.405 "strip_size_kb": 0, 00:21:34.405 "state": "configuring", 00:21:34.405 "raid_level": "raid1", 00:21:34.405 "superblock": true, 00:21:34.405 "num_base_bdevs": 3, 00:21:34.405 "num_base_bdevs_discovered": 1, 00:21:34.405 "num_base_bdevs_operational": 3, 00:21:34.405 "base_bdevs_list": [ 00:21:34.405 { 00:21:34.405 "name": "pt1", 00:21:34.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.405 "is_configured": true, 00:21:34.405 "data_offset": 2048, 00:21:34.405 "data_size": 63488 00:21:34.405 }, 00:21:34.405 { 00:21:34.405 "name": null, 00:21:34.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.405 "is_configured": false, 00:21:34.405 "data_offset": 0, 00:21:34.405 "data_size": 63488 00:21:34.405 }, 00:21:34.405 { 00:21:34.405 "name": null, 00:21:34.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:34.405 "is_configured": false, 00:21:34.405 
"data_offset": 2048, 00:21:34.405 "data_size": 63488 00:21:34.405 } 00:21:34.405 ] 00:21:34.405 }' 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.405 07:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.972 [2024-10-07 07:42:34.348833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:34.972 [2024-10-07 07:42:34.349052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.972 [2024-10-07 07:42:34.349164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:34.972 [2024-10-07 07:42:34.349255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.972 [2024-10-07 07:42:34.349778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.972 [2024-10-07 07:42:34.349810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:34.972 [2024-10-07 07:42:34.349919] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:34.972 [2024-10-07 07:42:34.349953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:34.972 pt2 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:34.972 07:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.972 [2024-10-07 07:42:34.356843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:34.972 [2024-10-07 07:42:34.357019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.972 [2024-10-07 07:42:34.357126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:34.972 [2024-10-07 07:42:34.357214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.972 [2024-10-07 07:42:34.357782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.972 [2024-10-07 07:42:34.357918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:34.972 [2024-10-07 07:42:34.358083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:34.972 [2024-10-07 07:42:34.358182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:34.972 [2024-10-07 07:42:34.358340] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:34.972 [2024-10-07 07:42:34.358439] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:34.972 [2024-10-07 07:42:34.358748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:34.972 [2024-10-07 07:42:34.358993] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:21:34.972 [2024-10-07 07:42:34.359089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:34.972 [2024-10-07 07:42:34.359431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.972 pt3 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.972 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.972 07:42:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:34.973 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.973 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:34.973 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.973 "name": "raid_bdev1", 00:21:34.973 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:34.973 "strip_size_kb": 0, 00:21:34.973 "state": "online", 00:21:34.973 "raid_level": "raid1", 00:21:34.973 "superblock": true, 00:21:34.973 "num_base_bdevs": 3, 00:21:34.973 "num_base_bdevs_discovered": 3, 00:21:34.973 "num_base_bdevs_operational": 3, 00:21:34.973 "base_bdevs_list": [ 00:21:34.973 { 00:21:34.973 "name": "pt1", 00:21:34.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.973 "is_configured": true, 00:21:34.973 "data_offset": 2048, 00:21:34.973 "data_size": 63488 00:21:34.973 }, 00:21:34.973 { 00:21:34.973 "name": "pt2", 00:21:34.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.973 "is_configured": true, 00:21:34.973 "data_offset": 2048, 00:21:34.973 "data_size": 63488 00:21:34.973 }, 00:21:34.973 { 00:21:34.973 "name": "pt3", 00:21:34.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:34.973 "is_configured": true, 00:21:34.973 "data_offset": 2048, 00:21:34.973 "data_size": 63488 00:21:34.973 } 00:21:34.973 ] 00:21:34.973 }' 00:21:34.973 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.973 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 [2024-10-07 07:42:34.805275] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:35.538 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.538 "name": "raid_bdev1", 00:21:35.538 "aliases": [ 00:21:35.538 "a7d2b64c-623d-4056-b59c-43d9bda16767" 00:21:35.538 ], 00:21:35.538 "product_name": "Raid Volume", 00:21:35.538 "block_size": 512, 00:21:35.538 "num_blocks": 63488, 00:21:35.538 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:35.538 "assigned_rate_limits": { 00:21:35.538 "rw_ios_per_sec": 0, 00:21:35.538 "rw_mbytes_per_sec": 0, 00:21:35.538 "r_mbytes_per_sec": 0, 00:21:35.538 "w_mbytes_per_sec": 0 00:21:35.538 }, 00:21:35.538 "claimed": false, 00:21:35.538 "zoned": false, 00:21:35.538 "supported_io_types": { 00:21:35.538 "read": true, 00:21:35.538 "write": true, 00:21:35.538 "unmap": false, 00:21:35.538 "flush": false, 00:21:35.538 "reset": true, 00:21:35.538 "nvme_admin": false, 00:21:35.538 "nvme_io": false, 00:21:35.538 "nvme_io_md": false, 00:21:35.538 "write_zeroes": true, 00:21:35.538 "zcopy": false, 00:21:35.538 "get_zone_info": 
false, 00:21:35.538 "zone_management": false, 00:21:35.538 "zone_append": false, 00:21:35.538 "compare": false, 00:21:35.538 "compare_and_write": false, 00:21:35.538 "abort": false, 00:21:35.538 "seek_hole": false, 00:21:35.538 "seek_data": false, 00:21:35.538 "copy": false, 00:21:35.538 "nvme_iov_md": false 00:21:35.538 }, 00:21:35.538 "memory_domains": [ 00:21:35.538 { 00:21:35.538 "dma_device_id": "system", 00:21:35.538 "dma_device_type": 1 00:21:35.538 }, 00:21:35.539 { 00:21:35.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.539 "dma_device_type": 2 00:21:35.539 }, 00:21:35.539 { 00:21:35.539 "dma_device_id": "system", 00:21:35.539 "dma_device_type": 1 00:21:35.539 }, 00:21:35.539 { 00:21:35.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.539 "dma_device_type": 2 00:21:35.539 }, 00:21:35.539 { 00:21:35.539 "dma_device_id": "system", 00:21:35.539 "dma_device_type": 1 00:21:35.539 }, 00:21:35.539 { 00:21:35.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.539 "dma_device_type": 2 00:21:35.539 } 00:21:35.539 ], 00:21:35.539 "driver_specific": { 00:21:35.539 "raid": { 00:21:35.539 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:35.539 "strip_size_kb": 0, 00:21:35.539 "state": "online", 00:21:35.539 "raid_level": "raid1", 00:21:35.539 "superblock": true, 00:21:35.539 "num_base_bdevs": 3, 00:21:35.539 "num_base_bdevs_discovered": 3, 00:21:35.539 "num_base_bdevs_operational": 3, 00:21:35.539 "base_bdevs_list": [ 00:21:35.539 { 00:21:35.539 "name": "pt1", 00:21:35.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:35.539 "is_configured": true, 00:21:35.539 "data_offset": 2048, 00:21:35.539 "data_size": 63488 00:21:35.539 }, 00:21:35.539 { 00:21:35.539 "name": "pt2", 00:21:35.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.539 "is_configured": true, 00:21:35.539 "data_offset": 2048, 00:21:35.539 "data_size": 63488 00:21:35.539 }, 00:21:35.539 { 00:21:35.539 "name": "pt3", 00:21:35.539 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:21:35.539 "is_configured": true, 00:21:35.539 "data_offset": 2048, 00:21:35.539 "data_size": 63488 00:21:35.539 } 00:21:35.539 ] 00:21:35.539 } 00:21:35.539 } 00:21:35.539 }' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:35.539 pt2 00:21:35.539 pt3' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 
-- # xtrace_disable 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.539 07:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 [2024-10-07 07:42:35.065334] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.539 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a7d2b64c-623d-4056-b59c-43d9bda16767 '!=' a7d2b64c-623d-4056-b59c-43d9bda16767 ']' 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 [2024-10-07 07:42:35.109130] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.798 07:42:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.798 "name": "raid_bdev1", 00:21:35.798 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:35.798 "strip_size_kb": 0, 00:21:35.798 "state": "online", 00:21:35.798 "raid_level": "raid1", 00:21:35.798 "superblock": true, 00:21:35.798 "num_base_bdevs": 3, 00:21:35.798 "num_base_bdevs_discovered": 2, 00:21:35.798 "num_base_bdevs_operational": 2, 00:21:35.798 "base_bdevs_list": [ 00:21:35.798 { 00:21:35.798 "name": null, 00:21:35.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.798 "is_configured": false, 00:21:35.798 "data_offset": 0, 00:21:35.798 "data_size": 63488 00:21:35.798 }, 00:21:35.798 { 00:21:35.798 "name": "pt2", 00:21:35.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.798 "is_configured": true, 00:21:35.798 "data_offset": 2048, 00:21:35.798 "data_size": 63488 00:21:35.798 }, 00:21:35.798 { 00:21:35.798 "name": "pt3", 00:21:35.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:35.798 "is_configured": true, 00:21:35.798 "data_offset": 2048, 00:21:35.798 "data_size": 63488 00:21:35.798 } 
00:21:35.798 ] 00:21:35.798 }' 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.798 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:36.057 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.057 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.057 [2024-10-07 07:42:35.613195] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.057 [2024-10-07 07:42:35.613229] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.057 [2024-10-07 07:42:35.613312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.057 [2024-10-07 07:42:35.613397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.057 [2024-10-07 07:42:35.613417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.315 07:42:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 [2024-10-07 07:42:35.689170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:36.315 [2024-10-07 07:42:35.689369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.315 [2024-10-07 07:42:35.689526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:36.315 [2024-10-07 07:42:35.689631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.315 [2024-10-07 07:42:35.692478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.315 [2024-10-07 07:42:35.692528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:36.315 [2024-10-07 07:42:35.692632] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:36.315 [2024-10-07 07:42:35.692710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:36.315 pt2 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.315 07:42:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.315 "name": "raid_bdev1", 00:21:36.315 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:36.315 "strip_size_kb": 0, 00:21:36.315 "state": "configuring", 00:21:36.315 "raid_level": "raid1", 00:21:36.315 "superblock": true, 00:21:36.315 "num_base_bdevs": 3, 00:21:36.315 "num_base_bdevs_discovered": 1, 00:21:36.315 "num_base_bdevs_operational": 2, 00:21:36.315 "base_bdevs_list": [ 00:21:36.315 { 00:21:36.315 "name": null, 00:21:36.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.315 "is_configured": false, 00:21:36.315 "data_offset": 2048, 00:21:36.315 "data_size": 63488 00:21:36.315 }, 00:21:36.315 { 00:21:36.315 "name": "pt2", 00:21:36.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.315 "is_configured": true, 00:21:36.315 "data_offset": 2048, 00:21:36.315 "data_size": 63488 00:21:36.315 }, 00:21:36.315 { 00:21:36.315 "name": null, 00:21:36.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:36.315 "is_configured": false, 00:21:36.315 "data_offset": 2048, 00:21:36.315 "data_size": 63488 00:21:36.315 } 
00:21:36.315 ] 00:21:36.315 }' 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.315 07:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.882 [2024-10-07 07:42:36.165363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:36.882 [2024-10-07 07:42:36.165589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.882 [2024-10-07 07:42:36.165654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:36.882 [2024-10-07 07:42:36.165798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.882 [2024-10-07 07:42:36.166370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.882 [2024-10-07 07:42:36.166510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:36.882 [2024-10-07 07:42:36.166723] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:36.882 [2024-10-07 07:42:36.166768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:36.882 [2024-10-07 07:42:36.166910] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:21:36.882 [2024-10-07 07:42:36.166924] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:36.882 [2024-10-07 07:42:36.167207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:36.882 [2024-10-07 07:42:36.167380] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:36.882 [2024-10-07 07:42:36.167391] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:36.882 [2024-10-07 07:42:36.167552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.882 pt3 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.882 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.883 
07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.883 "name": "raid_bdev1", 00:21:36.883 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:36.883 "strip_size_kb": 0, 00:21:36.883 "state": "online", 00:21:36.883 "raid_level": "raid1", 00:21:36.883 "superblock": true, 00:21:36.883 "num_base_bdevs": 3, 00:21:36.883 "num_base_bdevs_discovered": 2, 00:21:36.883 "num_base_bdevs_operational": 2, 00:21:36.883 "base_bdevs_list": [ 00:21:36.883 { 00:21:36.883 "name": null, 00:21:36.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.883 "is_configured": false, 00:21:36.883 "data_offset": 2048, 00:21:36.883 "data_size": 63488 00:21:36.883 }, 00:21:36.883 { 00:21:36.883 "name": "pt2", 00:21:36.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.883 "is_configured": true, 00:21:36.883 "data_offset": 2048, 00:21:36.883 "data_size": 63488 00:21:36.883 }, 00:21:36.883 { 00:21:36.883 "name": "pt3", 00:21:36.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:36.883 "is_configured": true, 00:21:36.883 "data_offset": 2048, 00:21:36.883 "data_size": 63488 00:21:36.883 } 00:21:36.883 ] 00:21:36.883 }' 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.883 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.141 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:37.141 07:42:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.141 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.141 [2024-10-07 07:42:36.625447] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.141 [2024-10-07 07:42:36.625636] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.141 [2024-10-07 07:42:36.625753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.141 [2024-10-07 07:42:36.625829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.141 [2024-10-07 07:42:36.625843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.142 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.142 [2024-10-07 07:42:36.697507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.142 [2024-10-07 07:42:36.697754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.142 [2024-10-07 07:42:36.697881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:37.142 [2024-10-07 07:42:36.697973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.142 [2024-10-07 07:42:36.700947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.400 [2024-10-07 07:42:36.701132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.400 [2024-10-07 07:42:36.701269] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:37.400 [2024-10-07 07:42:36.701329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:37.400 [2024-10-07 07:42:36.701505] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:37.400 [2024-10-07 07:42:36.701522] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.400 [2024-10-07 07:42:36.701548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:21:37.401 [2024-10-07 07:42:36.701615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:37.401 pt1 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.401 "name": "raid_bdev1", 00:21:37.401 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:37.401 "strip_size_kb": 0, 00:21:37.401 "state": "configuring", 00:21:37.401 "raid_level": "raid1", 00:21:37.401 "superblock": true, 00:21:37.401 "num_base_bdevs": 3, 00:21:37.401 "num_base_bdevs_discovered": 1, 00:21:37.401 "num_base_bdevs_operational": 2, 00:21:37.401 "base_bdevs_list": [ 00:21:37.401 { 00:21:37.401 "name": null, 00:21:37.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.401 "is_configured": false, 00:21:37.401 "data_offset": 2048, 00:21:37.401 "data_size": 63488 00:21:37.401 }, 00:21:37.401 { 00:21:37.401 "name": "pt2", 00:21:37.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.401 "is_configured": true, 00:21:37.401 "data_offset": 2048, 00:21:37.401 "data_size": 63488 00:21:37.401 }, 00:21:37.401 { 00:21:37.401 "name": null, 00:21:37.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:37.401 "is_configured": false, 00:21:37.401 "data_offset": 2048, 00:21:37.401 "data_size": 63488 00:21:37.401 } 00:21:37.401 ] 00:21:37.401 }' 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.401 07:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.659 [2024-10-07 07:42:37.201825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:37.659 [2024-10-07 07:42:37.203203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.659 [2024-10-07 07:42:37.203254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:37.659 [2024-10-07 07:42:37.203269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.659 [2024-10-07 07:42:37.203858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.659 [2024-10-07 07:42:37.203885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:37.659 [2024-10-07 07:42:37.203989] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:37.659 [2024-10-07 07:42:37.204042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:37.659 [2024-10-07 07:42:37.204205] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:37.659 [2024-10-07 07:42:37.204217] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:37.659 [2024-10-07 07:42:37.204553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:37.659 [2024-10-07 07:42:37.204747] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:37.659 [2024-10-07 07:42:37.204767] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:37.659 [2024-10-07 07:42:37.204919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.659 pt3 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.659 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.918 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 
]] 00:21:37.918 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.918 "name": "raid_bdev1", 00:21:37.918 "uuid": "a7d2b64c-623d-4056-b59c-43d9bda16767", 00:21:37.918 "strip_size_kb": 0, 00:21:37.918 "state": "online", 00:21:37.918 "raid_level": "raid1", 00:21:37.918 "superblock": true, 00:21:37.918 "num_base_bdevs": 3, 00:21:37.918 "num_base_bdevs_discovered": 2, 00:21:37.918 "num_base_bdevs_operational": 2, 00:21:37.918 "base_bdevs_list": [ 00:21:37.918 { 00:21:37.918 "name": null, 00:21:37.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.918 "is_configured": false, 00:21:37.918 "data_offset": 2048, 00:21:37.918 "data_size": 63488 00:21:37.918 }, 00:21:37.918 { 00:21:37.918 "name": "pt2", 00:21:37.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.918 "is_configured": true, 00:21:37.918 "data_offset": 2048, 00:21:37.918 "data_size": 63488 00:21:37.918 }, 00:21:37.918 { 00:21:37.918 "name": "pt3", 00:21:37.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:37.918 "is_configured": true, 00:21:37.918 "data_offset": 2048, 00:21:37.918 "data_size": 63488 00:21:37.918 } 00:21:37.918 ] 00:21:37.918 }' 00:21:37.918 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.918 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.177 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:38.177 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:38.177 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:38.177 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.177 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:38.435 07:42:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:38.435 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.436 [2024-10-07 07:42:37.758281] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a7d2b64c-623d-4056-b59c-43d9bda16767 '!=' a7d2b64c-623d-4056-b59c-43d9bda16767 ']' 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68754 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 68754 ']' 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 68754 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 68754 00:21:38.436 killing process with pid 68754 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 68754' 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # kill 68754 00:21:38.436 [2024-10-07 07:42:37.860794] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:38.436 [2024-10-07 07:42:37.860902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:38.436 [2024-10-07 07:42:37.860974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:38.436 [2024-10-07 07:42:37.860991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:38.436 07:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 68754 00:21:38.694 [2024-10-07 07:42:38.234403] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:40.597 07:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:40.597 00:21:40.597 real 0m8.442s 00:21:40.597 user 0m13.114s 00:21:40.597 sys 0m1.446s 00:21:40.597 07:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:40.597 ************************************ 00:21:40.597 END TEST raid_superblock_test 00:21:40.597 ************************************ 00:21:40.597 07:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.598 07:42:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:40.598 07:42:39 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:21:40.598 07:42:39 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:40.598 07:42:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:40.598 ************************************ 00:21:40.598 START TEST raid_read_error_test 00:21:40.598 ************************************ 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid1 3 read 00:21:40.598 07:42:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:40.598 07:42:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MWVRTMDrS3 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69205 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69205 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 69205 ']' 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:40.598 07:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.598 [2024-10-07 07:42:39.940713] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:21:40.598 [2024-10-07 07:42:39.940876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69205 ] 00:21:40.598 [2024-10-07 07:42:40.123223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.857 [2024-10-07 07:42:40.377017] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.116 [2024-10-07 07:42:40.626346] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.116 [2024-10-07 07:42:40.626587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 BaseBdev1_malloc 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 true 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 [2024-10-07 07:42:41.126899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:41.685 [2024-10-07 07:42:41.126962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.685 [2024-10-07 07:42:41.126985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:41.685 [2024-10-07 07:42:41.127002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.685 [2024-10-07 07:42:41.129844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.685 [2024-10-07 07:42:41.130054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:41.685 BaseBdev1 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 BaseBdev2_malloc 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 true 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 [2024-10-07 07:42:41.210677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:41.685 [2024-10-07 07:42:41.210877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.685 [2024-10-07 07:42:41.210960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:41.685 [2024-10-07 07:42:41.211055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.685 [2024-10-07 07:42:41.214040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.685 [2024-10-07 07:42:41.214216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:41.685 BaseBdev2 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.685 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 BaseBdev3_malloc 00:21:41.945 07:42:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 true 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 [2024-10-07 07:42:41.278992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:41.945 [2024-10-07 07:42:41.279170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.945 [2024-10-07 07:42:41.279202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:41.945 [2024-10-07 07:42:41.279219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.945 [2024-10-07 07:42:41.281965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.945 [2024-10-07 07:42:41.282013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:41.945 BaseBdev3 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 [2024-10-07 07:42:41.287059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.945 [2024-10-07 07:42:41.289460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.945 [2024-10-07 07:42:41.289545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.945 [2024-10-07 07:42:41.289800] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:41.945 [2024-10-07 07:42:41.289815] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:41.945 [2024-10-07 07:42:41.290128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:41.945 [2024-10-07 07:42:41.290344] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:41.945 [2024-10-07 07:42:41.290363] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:41.945 [2024-10-07 07:42:41.290586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.945 07:42:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:41.945 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.945 "name": "raid_bdev1", 00:21:41.946 "uuid": "0ddeed95-450e-45fc-a58b-5befb4df2668", 00:21:41.946 "strip_size_kb": 0, 00:21:41.946 "state": "online", 00:21:41.946 "raid_level": "raid1", 00:21:41.946 "superblock": true, 00:21:41.946 "num_base_bdevs": 3, 00:21:41.946 "num_base_bdevs_discovered": 3, 00:21:41.946 "num_base_bdevs_operational": 3, 00:21:41.946 "base_bdevs_list": [ 00:21:41.946 { 00:21:41.946 "name": "BaseBdev1", 00:21:41.946 "uuid": "08204a6d-cd4d-528b-9bdb-b1805c36d5f1", 00:21:41.946 "is_configured": true, 00:21:41.946 "data_offset": 2048, 00:21:41.946 "data_size": 63488 00:21:41.946 }, 00:21:41.946 { 00:21:41.946 "name": "BaseBdev2", 00:21:41.946 "uuid": "3bff861c-3e42-5066-86d3-1f276ecbdbd6", 00:21:41.946 "is_configured": true, 00:21:41.946 "data_offset": 2048, 00:21:41.946 "data_size": 63488 
00:21:41.946 }, 00:21:41.946 { 00:21:41.946 "name": "BaseBdev3", 00:21:41.946 "uuid": "794a906f-fc4e-59b3-a68b-df40878926c7", 00:21:41.946 "is_configured": true, 00:21:41.946 "data_offset": 2048, 00:21:41.946 "data_size": 63488 00:21:41.946 } 00:21:41.946 ] 00:21:41.946 }' 00:21:41.946 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.946 07:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.205 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:42.205 07:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:42.465 [2024-10-07 07:42:41.876962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.409 
07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.409 "name": "raid_bdev1", 00:21:43.409 "uuid": "0ddeed95-450e-45fc-a58b-5befb4df2668", 00:21:43.409 "strip_size_kb": 0, 00:21:43.409 "state": "online", 00:21:43.409 "raid_level": "raid1", 00:21:43.409 "superblock": true, 00:21:43.409 "num_base_bdevs": 3, 00:21:43.409 "num_base_bdevs_discovered": 3, 00:21:43.409 "num_base_bdevs_operational": 3, 00:21:43.409 "base_bdevs_list": [ 00:21:43.409 { 00:21:43.409 "name": "BaseBdev1", 00:21:43.409 "uuid": "08204a6d-cd4d-528b-9bdb-b1805c36d5f1", 
00:21:43.409 "is_configured": true, 00:21:43.409 "data_offset": 2048, 00:21:43.409 "data_size": 63488 00:21:43.409 }, 00:21:43.409 { 00:21:43.409 "name": "BaseBdev2", 00:21:43.409 "uuid": "3bff861c-3e42-5066-86d3-1f276ecbdbd6", 00:21:43.409 "is_configured": true, 00:21:43.409 "data_offset": 2048, 00:21:43.409 "data_size": 63488 00:21:43.409 }, 00:21:43.409 { 00:21:43.409 "name": "BaseBdev3", 00:21:43.409 "uuid": "794a906f-fc4e-59b3-a68b-df40878926c7", 00:21:43.409 "is_configured": true, 00:21:43.409 "data_offset": 2048, 00:21:43.409 "data_size": 63488 00:21:43.409 } 00:21:43.409 ] 00:21:43.409 }' 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.409 07:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.669 [2024-10-07 07:42:43.187330] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.669 [2024-10-07 07:42:43.187387] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.669 [2024-10-07 07:42:43.190606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.669 [2024-10-07 07:42:43.190672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.669 [2024-10-07 07:42:43.191091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.669 [2024-10-07 07:42:43.191328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 
0 == 0 ]] 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69205 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 69205 ']' 00:21:43.669 { 00:21:43.669 "results": [ 00:21:43.669 { 00:21:43.669 "job": "raid_bdev1", 00:21:43.669 "core_mask": "0x1", 00:21:43.669 "workload": "randrw", 00:21:43.669 "percentage": 50, 00:21:43.669 "status": "finished", 00:21:43.669 "queue_depth": 1, 00:21:43.669 "io_size": 131072, 00:21:43.669 "runtime": 1.307693, 00:21:43.669 "iops": 10713.523739899196, 00:21:43.669 "mibps": 1339.1904674873995, 00:21:43.669 "io_failed": 0, 00:21:43.669 "io_timeout": 0, 00:21:43.669 "avg_latency_us": 90.19441405798578, 00:21:43.669 "min_latency_us": 26.087619047619047, 00:21:43.669 "max_latency_us": 1732.0228571428572 00:21:43.669 } 00:21:43.669 ], 00:21:43.669 "core_count": 1 00:21:43.669 } 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 69205 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:43.669 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 69205 00:21:43.928 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:21:43.928 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:21:43.928 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 69205' 00:21:43.928 killing process with pid 69205 00:21:43.928 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 69205 00:21:43.928 07:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 69205 00:21:43.928 [2024-10-07 07:42:43.238051] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.185 [2024-10-07 07:42:43.531174] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MWVRTMDrS3 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:45.562 00:21:45.562 real 0m5.303s 00:21:45.562 user 0m6.389s 00:21:45.562 sys 0m0.653s 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:45.562 07:42:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.562 ************************************ 00:21:45.562 END TEST raid_read_error_test 00:21:45.562 ************************************ 00:21:45.822 07:42:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:45.822 07:42:45 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:21:45.822 07:42:45 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:45.822 07:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:45.822 ************************************ 00:21:45.822 START TEST raid_write_error_test 00:21:45.822 ************************************ 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1128 -- # raid_io_error_test raid1 3 write 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:45.822 07:42:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.f349frjd8M 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69363 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69363 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 69363 ']' 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:45.822 07:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.822 [2024-10-07 07:42:45.313668] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:21:45.822 [2024-10-07 07:42:45.313867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69363 ] 00:21:46.080 [2024-10-07 07:42:45.501509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.337 [2024-10-07 07:42:45.828363] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.596 [2024-10-07 07:42:46.097643] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.596 [2024-10-07 07:42:46.097703] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.854 BaseBdev1_malloc 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:46.854 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.113 true 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.113 [2024-10-07 07:42:46.419548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:47.113 [2024-10-07 07:42:46.419655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.113 [2024-10-07 07:42:46.419683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:47.113 [2024-10-07 07:42:46.419701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.113 [2024-10-07 07:42:46.423456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.113 [2024-10-07 07:42:46.423523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:47.113 BaseBdev1 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.113 BaseBdev2_malloc 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.113 true 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.113 [2024-10-07 07:42:46.508920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:47.113 [2024-10-07 07:42:46.509012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.113 [2024-10-07 07:42:46.509040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:47.113 [2024-10-07 07:42:46.509058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.113 [2024-10-07 07:42:46.512215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.113 [2024-10-07 07:42:46.512267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:47.113 BaseBdev2 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:47.113 07:42:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.113 BaseBdev3_malloc 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.113 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.113 true 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.114 [2024-10-07 07:42:46.579779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:47.114 [2024-10-07 07:42:46.579860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.114 [2024-10-07 07:42:46.579888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:47.114 [2024-10-07 07:42:46.579904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.114 [2024-10-07 07:42:46.583039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.114 [2024-10-07 07:42:46.583097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:21:47.114 BaseBdev3 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.114 [2024-10-07 07:42:46.588068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:47.114 [2024-10-07 07:42:46.590778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:47.114 [2024-10-07 07:42:46.590868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:47.114 [2024-10-07 07:42:46.591117] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:47.114 [2024-10-07 07:42:46.591135] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:47.114 [2024-10-07 07:42:46.591487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:47.114 [2024-10-07 07:42:46.591735] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:47.114 [2024-10-07 07:42:46.591762] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:47.114 [2024-10-07 07:42:46.592021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.114 "name": "raid_bdev1", 00:21:47.114 "uuid": "f0d93900-95d6-4b0f-8440-c85d4d1206c0", 00:21:47.114 "strip_size_kb": 0, 00:21:47.114 "state": "online", 00:21:47.114 "raid_level": "raid1", 00:21:47.114 "superblock": true, 00:21:47.114 "num_base_bdevs": 3, 00:21:47.114 "num_base_bdevs_discovered": 3, 00:21:47.114 "num_base_bdevs_operational": 3, 00:21:47.114 "base_bdevs_list": [ 00:21:47.114 { 00:21:47.114 "name": "BaseBdev1", 00:21:47.114 
"uuid": "3fe62900-b0b9-5e48-af94-de57a7eb25f8", 00:21:47.114 "is_configured": true, 00:21:47.114 "data_offset": 2048, 00:21:47.114 "data_size": 63488 00:21:47.114 }, 00:21:47.114 { 00:21:47.114 "name": "BaseBdev2", 00:21:47.114 "uuid": "70f67ff1-0d16-5d01-a369-fdcfd04c73d2", 00:21:47.114 "is_configured": true, 00:21:47.114 "data_offset": 2048, 00:21:47.114 "data_size": 63488 00:21:47.114 }, 00:21:47.114 { 00:21:47.114 "name": "BaseBdev3", 00:21:47.114 "uuid": "ee4f2f43-a296-5cbf-9bf9-fe89bda984b2", 00:21:47.114 "is_configured": true, 00:21:47.114 "data_offset": 2048, 00:21:47.114 "data_size": 63488 00:21:47.114 } 00:21:47.114 ] 00:21:47.114 }' 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.114 07:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.680 07:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:47.680 07:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:47.680 [2024-10-07 07:42:47.173912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.618 [2024-10-07 07:42:48.050345] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:48.618 [2024-10-07 07:42:48.050405] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.618 [2024-10-07 07:42:48.050620] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.618 "name": "raid_bdev1", 00:21:48.618 "uuid": "f0d93900-95d6-4b0f-8440-c85d4d1206c0", 00:21:48.618 "strip_size_kb": 0, 00:21:48.618 "state": "online", 00:21:48.618 "raid_level": "raid1", 00:21:48.618 "superblock": true, 00:21:48.618 "num_base_bdevs": 3, 00:21:48.618 "num_base_bdevs_discovered": 2, 00:21:48.618 "num_base_bdevs_operational": 2, 00:21:48.618 "base_bdevs_list": [ 00:21:48.618 { 00:21:48.618 "name": null, 00:21:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.618 "is_configured": false, 00:21:48.618 "data_offset": 0, 00:21:48.618 "data_size": 63488 00:21:48.618 }, 00:21:48.618 { 00:21:48.618 "name": "BaseBdev2", 00:21:48.618 "uuid": "70f67ff1-0d16-5d01-a369-fdcfd04c73d2", 00:21:48.618 "is_configured": true, 00:21:48.618 "data_offset": 2048, 00:21:48.618 "data_size": 63488 00:21:48.618 }, 00:21:48.618 { 00:21:48.618 "name": "BaseBdev3", 00:21:48.618 "uuid": "ee4f2f43-a296-5cbf-9bf9-fe89bda984b2", 00:21:48.618 "is_configured": true, 00:21:48.618 "data_offset": 2048, 00:21:48.618 "data_size": 63488 00:21:48.618 } 00:21:48.618 ] 00:21:48.618 }' 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.618 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.185 [2024-10-07 07:42:48.481931] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.185 [2024-10-07 07:42:48.481978] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.185 [2024-10-07 07:42:48.484835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.185 [2024-10-07 07:42:48.484891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.185 [2024-10-07 07:42:48.484975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.185 [2024-10-07 07:42:48.484991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:49.185 { 00:21:49.185 "results": [ 00:21:49.185 { 00:21:49.185 "job": "raid_bdev1", 00:21:49.185 "core_mask": "0x1", 00:21:49.185 "workload": "randrw", 00:21:49.185 "percentage": 50, 00:21:49.185 "status": "finished", 00:21:49.185 "queue_depth": 1, 00:21:49.185 "io_size": 131072, 00:21:49.185 "runtime": 1.305716, 00:21:49.185 "iops": 13800.857154235684, 00:21:49.185 "mibps": 1725.1071442794605, 00:21:49.185 "io_failed": 0, 00:21:49.185 "io_timeout": 0, 00:21:49.185 "avg_latency_us": 69.48774715924105, 00:21:49.185 "min_latency_us": 24.624761904761904, 00:21:49.185 "max_latency_us": 1560.3809523809523 00:21:49.185 } 00:21:49.185 ], 00:21:49.185 "core_count": 1 00:21:49.185 } 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69363 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 69363 ']' 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 69363 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # uname 00:21:49.185 07:42:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 69363 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:21:49.185 killing process with pid 69363 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 69363' 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 69363 00:21:49.185 07:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 69363 00:21:49.185 [2024-10-07 07:42:48.521296] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.444 [2024-10-07 07:42:48.776694] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.f349frjd8M 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:50.819 00:21:50.819 real 0m5.019s 00:21:50.819 user 0m5.875s 00:21:50.819 sys 0m0.792s 00:21:50.819 07:42:50 
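The bdevperf "results" JSON above ties together on its own arithmetic: throughput in MiB/s is IOPS times the 131072-byte I/O size, and IOPS times runtime recovers the total I/O count. A quick sanity check of the logged figures (values copied from the trace; nothing here is SPDK API):

```python
# Figures from the bdevperf results JSON logged above
iops = 13800.857154235684
io_size = 131072           # bytes per I/O (128 KiB)
runtime = 1.305716         # seconds
mibps = 1725.1071442794605

# MiB/s is IOPS times I/O size: 13800.86 * 128 KiB ~= 1725.11 MiB/s
assert abs(iops * io_size / (1 << 20) - mibps) < 1e-6

# Total I/Os completed over the run
total_ios = iops * runtime
print(round(total_ios))  # -> 18020
```

With `io_failed` at 0, the `fail_per_s=0.00` extracted later by the `grep`/`awk` pipeline at bdev_raid.sh@845 is consistent with these numbers.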
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:21:50.819 07:42:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.819 ************************************ 00:21:50.819 END TEST raid_write_error_test 00:21:50.819 ************************************ 00:21:50.819 07:42:50 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:21:50.819 07:42:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:50.819 07:42:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:50.819 07:42:50 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:21:50.819 07:42:50 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:21:50.819 07:42:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.819 ************************************ 00:21:50.819 START TEST raid_state_function_test 00:21:50.819 ************************************ 00:21:50.819 07:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid0 4 false 00:21:50.819 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:21:50.819 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:50.820 
07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69509 00:21:50.820 Process raid pid: 69509 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69509' 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69509 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 69509 ']' 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:21:50.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:21:50.820 07:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.079 [2024-10-07 07:42:50.386490] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
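The prologue traced above (bdev_raid.sh@209-225) builds the `base_bdevs` array by looping `i` from 1 to `num_base_bdevs` and echoing `BaseBdev$i`, then derives `strip_size_create_arg='-z 64'` because raid0 is not raid1, and leaves `superblock_create_arg` empty because `superblock=false`. A Python sketch of that argument construction (the true-branch superblock flag is not shown in this excerpt, so only the logged path is reproduced):

```python
# Mirrors the raid_state_function_test prologue for "raid0 4 false"
raid_level = "raid0"
num_base_bdevs = 4
superblock = False

# bdev_raid.sh@209-211: emit BaseBdev1..BaseBdev4
base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]

# bdev_raid.sh@215-217: raid1 takes no strip size; everything else uses -z 64
strip_size_create_arg = "" if raid_level == "raid1" else "-z 64"
# bdev_raid.sh@222-225: superblock=false, so no extra flag on this path
superblock_create_arg = ""

cmd = (f"bdev_raid_create {strip_size_create_arg} -r {raid_level} "
       f"-b '{' '.join(base_bdevs)}' -n Existed_Raid")
print(cmd)
```

The printed command matches the `rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid` invocation that follows in the trace.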
00:21:51.079 [2024-10-07 07:42:50.386669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.079 [2024-10-07 07:42:50.575822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.337 [2024-10-07 07:42:50.890178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.596 [2024-10-07 07:42:51.119146] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.596 [2024-10-07 07:42:51.119202] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.855 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:21:51.855 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.856 [2024-10-07 07:42:51.383808] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:51.856 [2024-10-07 07:42:51.383889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:51.856 [2024-10-07 07:42:51.383904] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:51.856 [2024-10-07 07:42:51.383923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:51.856 [2024-10-07 07:42:51.383935] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:51.856 [2024-10-07 07:42:51.383951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:51.856 [2024-10-07 07:42:51.383961] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:51.856 [2024-10-07 07:42:51.383977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.856 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:52.132 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.132 "name": "Existed_Raid", 00:21:52.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.132 "strip_size_kb": 64, 00:21:52.132 "state": "configuring", 00:21:52.132 "raid_level": "raid0", 00:21:52.132 "superblock": false, 00:21:52.132 "num_base_bdevs": 4, 00:21:52.132 "num_base_bdevs_discovered": 0, 00:21:52.132 "num_base_bdevs_operational": 4, 00:21:52.132 "base_bdevs_list": [ 00:21:52.132 { 00:21:52.132 "name": "BaseBdev1", 00:21:52.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.132 "is_configured": false, 00:21:52.132 "data_offset": 0, 00:21:52.132 "data_size": 0 00:21:52.132 }, 00:21:52.132 { 00:21:52.132 "name": "BaseBdev2", 00:21:52.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.132 "is_configured": false, 00:21:52.132 "data_offset": 0, 00:21:52.132 "data_size": 0 00:21:52.132 }, 00:21:52.132 { 00:21:52.132 "name": "BaseBdev3", 00:21:52.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.132 "is_configured": false, 00:21:52.132 "data_offset": 0, 00:21:52.132 "data_size": 0 00:21:52.132 }, 00:21:52.132 { 00:21:52.132 "name": "BaseBdev4", 00:21:52.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.132 "is_configured": false, 00:21:52.132 "data_offset": 0, 00:21:52.132 "data_size": 0 00:21:52.132 } 00:21:52.132 ] 00:21:52.132 }' 00:21:52.132 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.132 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
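The Existed_Raid dump above shows the array in `configuring` state: all four base-bdev slots hold the all-zero placeholder UUID with `is_configured: false`, and `num_base_bdevs_discovered` is 0. A small sketch of how the discovered count tracks configured slots, using the UUIDs from this log (illustrative only):

```python
# Condensed from the Existed_Raid info above: four unconfigured slots,
# each holding the all-zero placeholder UUID until a real bdev is claimed.
base_bdevs_list = [
    {"name": f"BaseBdev{i}",
     "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": False,
     "data_size": 0}
    for i in range(1, 5)
]

discovered = sum(b["is_configured"] for b in base_bdevs_list)
assert discovered == 0  # matches "num_base_bdevs_discovered": 0

# Once bdev_malloc_create supplies BaseBdev1 and the raid claims it,
# that slot carries the malloc bdev's real UUID and flips to configured,
# as the later Existed_Raid dump in this trace shows.
base_bdevs_list[0].update(
    uuid="0403ab86-e5a4-4356-baaa-7b2aae181498",
    is_configured=True, data_size=65536)
discovered = sum(b["is_configured"] for b in base_bdevs_list)
print(discovered)  # -> 1
```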
00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.449 [2024-10-07 07:42:51.819821] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:52.449 [2024-10-07 07:42:51.819875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.449 [2024-10-07 07:42:51.827818] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:52.449 [2024-10-07 07:42:51.827868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:52.449 [2024-10-07 07:42:51.827880] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:52.449 [2024-10-07 07:42:51.827915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:52.449 [2024-10-07 07:42:51.827925] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:52.449 [2024-10-07 07:42:51.827941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:52.449 [2024-10-07 07:42:51.827951] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:52.449 [2024-10-07 07:42:51.827967] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:52.449 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.450 [2024-10-07 07:42:51.891451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:52.450 BaseBdev1 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.450 [ 00:21:52.450 { 00:21:52.450 "name": "BaseBdev1", 00:21:52.450 "aliases": [ 00:21:52.450 "0403ab86-e5a4-4356-baaa-7b2aae181498" 00:21:52.450 ], 00:21:52.450 "product_name": "Malloc disk", 00:21:52.450 "block_size": 512, 00:21:52.450 "num_blocks": 65536, 00:21:52.450 "uuid": "0403ab86-e5a4-4356-baaa-7b2aae181498", 00:21:52.450 "assigned_rate_limits": { 00:21:52.450 "rw_ios_per_sec": 0, 00:21:52.450 "rw_mbytes_per_sec": 0, 00:21:52.450 "r_mbytes_per_sec": 0, 00:21:52.450 "w_mbytes_per_sec": 0 00:21:52.450 }, 00:21:52.450 "claimed": true, 00:21:52.450 "claim_type": "exclusive_write", 00:21:52.450 "zoned": false, 00:21:52.450 "supported_io_types": { 00:21:52.450 "read": true, 00:21:52.450 "write": true, 00:21:52.450 "unmap": true, 00:21:52.450 "flush": true, 00:21:52.450 "reset": true, 00:21:52.450 "nvme_admin": false, 00:21:52.450 "nvme_io": false, 00:21:52.450 "nvme_io_md": false, 00:21:52.450 "write_zeroes": true, 00:21:52.450 "zcopy": true, 00:21:52.450 "get_zone_info": false, 00:21:52.450 "zone_management": false, 00:21:52.450 "zone_append": false, 00:21:52.450 "compare": false, 00:21:52.450 "compare_and_write": false, 00:21:52.450 "abort": true, 00:21:52.450 "seek_hole": false, 00:21:52.450 "seek_data": false, 00:21:52.450 "copy": true, 00:21:52.450 "nvme_iov_md": false 00:21:52.450 }, 00:21:52.450 "memory_domains": [ 00:21:52.450 { 00:21:52.450 "dma_device_id": "system", 00:21:52.450 "dma_device_type": 1 00:21:52.450 }, 00:21:52.450 { 00:21:52.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.450 "dma_device_type": 2 00:21:52.450 } 00:21:52.450 ], 00:21:52.450 "driver_specific": {} 00:21:52.450 } 00:21:52.450 ] 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 
-- # [[ 0 == 0 ]] 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.450 "name": "Existed_Raid", 
00:21:52.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.450 "strip_size_kb": 64, 00:21:52.450 "state": "configuring", 00:21:52.450 "raid_level": "raid0", 00:21:52.450 "superblock": false, 00:21:52.450 "num_base_bdevs": 4, 00:21:52.450 "num_base_bdevs_discovered": 1, 00:21:52.450 "num_base_bdevs_operational": 4, 00:21:52.450 "base_bdevs_list": [ 00:21:52.450 { 00:21:52.450 "name": "BaseBdev1", 00:21:52.450 "uuid": "0403ab86-e5a4-4356-baaa-7b2aae181498", 00:21:52.450 "is_configured": true, 00:21:52.450 "data_offset": 0, 00:21:52.450 "data_size": 65536 00:21:52.450 }, 00:21:52.450 { 00:21:52.450 "name": "BaseBdev2", 00:21:52.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.450 "is_configured": false, 00:21:52.450 "data_offset": 0, 00:21:52.450 "data_size": 0 00:21:52.450 }, 00:21:52.450 { 00:21:52.450 "name": "BaseBdev3", 00:21:52.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.450 "is_configured": false, 00:21:52.450 "data_offset": 0, 00:21:52.450 "data_size": 0 00:21:52.450 }, 00:21:52.450 { 00:21:52.450 "name": "BaseBdev4", 00:21:52.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.450 "is_configured": false, 00:21:52.450 "data_offset": 0, 00:21:52.450 "data_size": 0 00:21:52.450 } 00:21:52.450 ] 00:21:52.450 }' 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.450 07:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.049 [2024-10-07 07:42:52.379626] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:53.049 [2024-10-07 07:42:52.379696] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.049 [2024-10-07 07:42:52.387676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.049 [2024-10-07 07:42:52.390070] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.049 [2024-10-07 07:42:52.390133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.049 [2024-10-07 07:42:52.390147] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:53.049 [2024-10-07 07:42:52.390166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.049 [2024-10-07 07:42:52.390178] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:53.049 [2024-10-07 07:42:52.390194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:53.049 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
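The base bdevs being claimed in this trace come from `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN`: a 32 MiB malloc disk with 512-byte blocks. The sizing arithmetic behind the `num_blocks: 65536` and `data_size: 65536` fields seen in the dumps (a worked check, not SPDK code):

```python
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1: 32 MiB of 512-byte blocks
size_mb = 32
block_size = 512
num_blocks = size_mb * 1024 * 1024 // block_size
print(num_blocks)  # -> 65536
```

This matches the `"block_size": 512, "num_blocks": 65536` reported by `bdev_get_bdevs` for BaseBdev1 earlier in the trace, and (with superblock disabled, `data_offset: 0`) the 65536-block `data_size` in the raid info.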
00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.050 "name": "Existed_Raid", 00:21:53.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.050 "strip_size_kb": 64, 00:21:53.050 "state": "configuring", 00:21:53.050 "raid_level": "raid0", 00:21:53.050 "superblock": false, 00:21:53.050 "num_base_bdevs": 4, 00:21:53.050 
"num_base_bdevs_discovered": 1, 00:21:53.050 "num_base_bdevs_operational": 4, 00:21:53.050 "base_bdevs_list": [ 00:21:53.050 { 00:21:53.050 "name": "BaseBdev1", 00:21:53.050 "uuid": "0403ab86-e5a4-4356-baaa-7b2aae181498", 00:21:53.050 "is_configured": true, 00:21:53.050 "data_offset": 0, 00:21:53.050 "data_size": 65536 00:21:53.050 }, 00:21:53.050 { 00:21:53.050 "name": "BaseBdev2", 00:21:53.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.050 "is_configured": false, 00:21:53.050 "data_offset": 0, 00:21:53.050 "data_size": 0 00:21:53.050 }, 00:21:53.050 { 00:21:53.050 "name": "BaseBdev3", 00:21:53.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.050 "is_configured": false, 00:21:53.050 "data_offset": 0, 00:21:53.050 "data_size": 0 00:21:53.050 }, 00:21:53.050 { 00:21:53.050 "name": "BaseBdev4", 00:21:53.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.050 "is_configured": false, 00:21:53.050 "data_offset": 0, 00:21:53.050 "data_size": 0 00:21:53.050 } 00:21:53.050 ] 00:21:53.050 }' 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.050 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.308 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:53.308 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.308 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.565 [2024-10-07 07:42:52.883890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:53.565 BaseBdev2 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:53.565 07:42:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:53.565 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.566 [ 00:21:53.566 { 00:21:53.566 "name": "BaseBdev2", 00:21:53.566 "aliases": [ 00:21:53.566 "616ade79-e619-46db-a399-bc82d0393ab4" 00:21:53.566 ], 00:21:53.566 "product_name": "Malloc disk", 00:21:53.566 "block_size": 512, 00:21:53.566 "num_blocks": 65536, 00:21:53.566 "uuid": "616ade79-e619-46db-a399-bc82d0393ab4", 00:21:53.566 "assigned_rate_limits": { 00:21:53.566 "rw_ios_per_sec": 0, 00:21:53.566 "rw_mbytes_per_sec": 0, 00:21:53.566 "r_mbytes_per_sec": 0, 00:21:53.566 "w_mbytes_per_sec": 0 00:21:53.566 }, 00:21:53.566 "claimed": true, 00:21:53.566 "claim_type": "exclusive_write", 00:21:53.566 "zoned": false, 00:21:53.566 "supported_io_types": { 
00:21:53.566 "read": true, 00:21:53.566 "write": true, 00:21:53.566 "unmap": true, 00:21:53.566 "flush": true, 00:21:53.566 "reset": true, 00:21:53.566 "nvme_admin": false, 00:21:53.566 "nvme_io": false, 00:21:53.566 "nvme_io_md": false, 00:21:53.566 "write_zeroes": true, 00:21:53.566 "zcopy": true, 00:21:53.566 "get_zone_info": false, 00:21:53.566 "zone_management": false, 00:21:53.566 "zone_append": false, 00:21:53.566 "compare": false, 00:21:53.566 "compare_and_write": false, 00:21:53.566 "abort": true, 00:21:53.566 "seek_hole": false, 00:21:53.566 "seek_data": false, 00:21:53.566 "copy": true, 00:21:53.566 "nvme_iov_md": false 00:21:53.566 }, 00:21:53.566 "memory_domains": [ 00:21:53.566 { 00:21:53.566 "dma_device_id": "system", 00:21:53.566 "dma_device_type": 1 00:21:53.566 }, 00:21:53.566 { 00:21:53.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.566 "dma_device_type": 2 00:21:53.566 } 00:21:53.566 ], 00:21:53.566 "driver_specific": {} 00:21:53.566 } 00:21:53.566 ] 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.566 "name": "Existed_Raid", 00:21:53.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.566 "strip_size_kb": 64, 00:21:53.566 "state": "configuring", 00:21:53.566 "raid_level": "raid0", 00:21:53.566 "superblock": false, 00:21:53.566 "num_base_bdevs": 4, 00:21:53.566 "num_base_bdevs_discovered": 2, 00:21:53.566 "num_base_bdevs_operational": 4, 00:21:53.566 "base_bdevs_list": [ 00:21:53.566 { 00:21:53.566 "name": "BaseBdev1", 00:21:53.566 "uuid": "0403ab86-e5a4-4356-baaa-7b2aae181498", 00:21:53.566 "is_configured": true, 00:21:53.566 "data_offset": 0, 00:21:53.566 "data_size": 65536 00:21:53.566 }, 00:21:53.566 { 00:21:53.566 "name": "BaseBdev2", 00:21:53.566 "uuid": "616ade79-e619-46db-a399-bc82d0393ab4", 00:21:53.566 
"is_configured": true, 00:21:53.566 "data_offset": 0, 00:21:53.566 "data_size": 65536 00:21:53.566 }, 00:21:53.566 { 00:21:53.566 "name": "BaseBdev3", 00:21:53.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.566 "is_configured": false, 00:21:53.566 "data_offset": 0, 00:21:53.566 "data_size": 0 00:21:53.566 }, 00:21:53.566 { 00:21:53.566 "name": "BaseBdev4", 00:21:53.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.566 "is_configured": false, 00:21:53.566 "data_offset": 0, 00:21:53.566 "data_size": 0 00:21:53.566 } 00:21:53.566 ] 00:21:53.566 }' 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.566 07:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.825 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:53.825 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:53.825 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.083 [2024-10-07 07:42:53.416577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.083 BaseBdev3 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.083 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.083 [ 00:21:54.083 { 00:21:54.083 "name": "BaseBdev3", 00:21:54.083 "aliases": [ 00:21:54.083 "2ad45371-6feb-4565-ab97-a8401df34e9b" 00:21:54.083 ], 00:21:54.083 "product_name": "Malloc disk", 00:21:54.083 "block_size": 512, 00:21:54.083 "num_blocks": 65536, 00:21:54.083 "uuid": "2ad45371-6feb-4565-ab97-a8401df34e9b", 00:21:54.083 "assigned_rate_limits": { 00:21:54.083 "rw_ios_per_sec": 0, 00:21:54.083 "rw_mbytes_per_sec": 0, 00:21:54.083 "r_mbytes_per_sec": 0, 00:21:54.083 "w_mbytes_per_sec": 0 00:21:54.083 }, 00:21:54.083 "claimed": true, 00:21:54.083 "claim_type": "exclusive_write", 00:21:54.083 "zoned": false, 00:21:54.083 "supported_io_types": { 00:21:54.083 "read": true, 00:21:54.083 "write": true, 00:21:54.083 "unmap": true, 00:21:54.083 "flush": true, 00:21:54.083 "reset": true, 00:21:54.083 "nvme_admin": false, 00:21:54.083 "nvme_io": false, 00:21:54.083 "nvme_io_md": false, 00:21:54.083 "write_zeroes": true, 00:21:54.083 "zcopy": true, 00:21:54.083 "get_zone_info": false, 00:21:54.083 "zone_management": false, 00:21:54.083 "zone_append": false, 00:21:54.083 "compare": false, 00:21:54.083 "compare_and_write": false, 
00:21:54.083 "abort": true, 00:21:54.083 "seek_hole": false, 00:21:54.083 "seek_data": false, 00:21:54.083 "copy": true, 00:21:54.083 "nvme_iov_md": false 00:21:54.083 }, 00:21:54.083 "memory_domains": [ 00:21:54.083 { 00:21:54.083 "dma_device_id": "system", 00:21:54.083 "dma_device_type": 1 00:21:54.083 }, 00:21:54.083 { 00:21:54.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.084 "dma_device_type": 2 00:21:54.084 } 00:21:54.084 ], 00:21:54.084 "driver_specific": {} 00:21:54.084 } 00:21:54.084 ] 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.084 "name": "Existed_Raid", 00:21:54.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.084 "strip_size_kb": 64, 00:21:54.084 "state": "configuring", 00:21:54.084 "raid_level": "raid0", 00:21:54.084 "superblock": false, 00:21:54.084 "num_base_bdevs": 4, 00:21:54.084 "num_base_bdevs_discovered": 3, 00:21:54.084 "num_base_bdevs_operational": 4, 00:21:54.084 "base_bdevs_list": [ 00:21:54.084 { 00:21:54.084 "name": "BaseBdev1", 00:21:54.084 "uuid": "0403ab86-e5a4-4356-baaa-7b2aae181498", 00:21:54.084 "is_configured": true, 00:21:54.084 "data_offset": 0, 00:21:54.084 "data_size": 65536 00:21:54.084 }, 00:21:54.084 { 00:21:54.084 "name": "BaseBdev2", 00:21:54.084 "uuid": "616ade79-e619-46db-a399-bc82d0393ab4", 00:21:54.084 "is_configured": true, 00:21:54.084 "data_offset": 0, 00:21:54.084 "data_size": 65536 00:21:54.084 }, 00:21:54.084 { 00:21:54.084 "name": "BaseBdev3", 00:21:54.084 "uuid": "2ad45371-6feb-4565-ab97-a8401df34e9b", 00:21:54.084 "is_configured": true, 00:21:54.084 "data_offset": 0, 00:21:54.084 "data_size": 65536 00:21:54.084 }, 00:21:54.084 { 00:21:54.084 "name": "BaseBdev4", 00:21:54.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.084 "is_configured": false, 
00:21:54.084 "data_offset": 0, 00:21:54.084 "data_size": 0 00:21:54.084 } 00:21:54.084 ] 00:21:54.084 }' 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.084 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.651 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.652 [2024-10-07 07:42:53.978269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:54.652 [2024-10-07 07:42:53.978338] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:54.652 [2024-10-07 07:42:53.978355] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:54.652 [2024-10-07 07:42:53.978780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:54.652 [2024-10-07 07:42:53.979023] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:54.652 [2024-10-07 07:42:53.979064] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:54.652 [2024-10-07 07:42:53.979420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.652 BaseBdev4 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.652 07:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.652 [ 00:21:54.652 { 00:21:54.652 "name": "BaseBdev4", 00:21:54.652 "aliases": [ 00:21:54.652 "61a50314-313f-4103-ae86-2fe7e8cf1bc0" 00:21:54.652 ], 00:21:54.652 "product_name": "Malloc disk", 00:21:54.652 "block_size": 512, 00:21:54.652 "num_blocks": 65536, 00:21:54.652 "uuid": "61a50314-313f-4103-ae86-2fe7e8cf1bc0", 00:21:54.652 "assigned_rate_limits": { 00:21:54.652 "rw_ios_per_sec": 0, 00:21:54.652 "rw_mbytes_per_sec": 0, 00:21:54.652 "r_mbytes_per_sec": 0, 00:21:54.652 "w_mbytes_per_sec": 0 00:21:54.652 }, 00:21:54.652 "claimed": true, 00:21:54.652 "claim_type": "exclusive_write", 00:21:54.652 "zoned": false, 00:21:54.652 "supported_io_types": { 00:21:54.652 "read": true, 00:21:54.652 "write": true, 00:21:54.652 "unmap": true, 00:21:54.652 "flush": true, 00:21:54.652 "reset": true, 00:21:54.652 
"nvme_admin": false, 00:21:54.652 "nvme_io": false, 00:21:54.652 "nvme_io_md": false, 00:21:54.652 "write_zeroes": true, 00:21:54.652 "zcopy": true, 00:21:54.652 "get_zone_info": false, 00:21:54.652 "zone_management": false, 00:21:54.652 "zone_append": false, 00:21:54.652 "compare": false, 00:21:54.652 "compare_and_write": false, 00:21:54.652 "abort": true, 00:21:54.652 "seek_hole": false, 00:21:54.652 "seek_data": false, 00:21:54.652 "copy": true, 00:21:54.652 "nvme_iov_md": false 00:21:54.652 }, 00:21:54.652 "memory_domains": [ 00:21:54.652 { 00:21:54.652 "dma_device_id": "system", 00:21:54.652 "dma_device_type": 1 00:21:54.652 }, 00:21:54.652 { 00:21:54.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.652 "dma_device_type": 2 00:21:54.652 } 00:21:54.652 ], 00:21:54.652 "driver_specific": {} 00:21:54.652 } 00:21:54.652 ] 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:54.652 07:42:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.652 "name": "Existed_Raid", 00:21:54.652 "uuid": "6ba5dc03-1187-43f6-8bb2-80b59aa77069", 00:21:54.652 "strip_size_kb": 64, 00:21:54.652 "state": "online", 00:21:54.652 "raid_level": "raid0", 00:21:54.652 "superblock": false, 00:21:54.652 "num_base_bdevs": 4, 00:21:54.652 "num_base_bdevs_discovered": 4, 00:21:54.652 "num_base_bdevs_operational": 4, 00:21:54.652 "base_bdevs_list": [ 00:21:54.652 { 00:21:54.652 "name": "BaseBdev1", 00:21:54.652 "uuid": "0403ab86-e5a4-4356-baaa-7b2aae181498", 00:21:54.652 "is_configured": true, 00:21:54.652 "data_offset": 0, 00:21:54.652 "data_size": 65536 00:21:54.652 }, 00:21:54.652 { 00:21:54.652 "name": "BaseBdev2", 00:21:54.652 "uuid": "616ade79-e619-46db-a399-bc82d0393ab4", 00:21:54.652 "is_configured": true, 00:21:54.652 "data_offset": 0, 00:21:54.652 "data_size": 65536 00:21:54.652 }, 00:21:54.652 { 00:21:54.652 "name": "BaseBdev3", 00:21:54.652 "uuid": 
"2ad45371-6feb-4565-ab97-a8401df34e9b", 00:21:54.652 "is_configured": true, 00:21:54.652 "data_offset": 0, 00:21:54.652 "data_size": 65536 00:21:54.652 }, 00:21:54.652 { 00:21:54.652 "name": "BaseBdev4", 00:21:54.652 "uuid": "61a50314-313f-4103-ae86-2fe7e8cf1bc0", 00:21:54.652 "is_configured": true, 00:21:54.652 "data_offset": 0, 00:21:54.652 "data_size": 65536 00:21:54.652 } 00:21:54.652 ] 00:21:54.652 }' 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.652 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:54.911 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.911 [2024-10-07 07:42:54.466851] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.169 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.169 07:42:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.169 "name": "Existed_Raid", 00:21:55.169 "aliases": [ 00:21:55.169 "6ba5dc03-1187-43f6-8bb2-80b59aa77069" 00:21:55.169 ], 00:21:55.169 "product_name": "Raid Volume", 00:21:55.169 "block_size": 512, 00:21:55.169 "num_blocks": 262144, 00:21:55.169 "uuid": "6ba5dc03-1187-43f6-8bb2-80b59aa77069", 00:21:55.169 "assigned_rate_limits": { 00:21:55.169 "rw_ios_per_sec": 0, 00:21:55.169 "rw_mbytes_per_sec": 0, 00:21:55.169 "r_mbytes_per_sec": 0, 00:21:55.169 "w_mbytes_per_sec": 0 00:21:55.169 }, 00:21:55.169 "claimed": false, 00:21:55.169 "zoned": false, 00:21:55.169 "supported_io_types": { 00:21:55.169 "read": true, 00:21:55.169 "write": true, 00:21:55.169 "unmap": true, 00:21:55.169 "flush": true, 00:21:55.169 "reset": true, 00:21:55.169 "nvme_admin": false, 00:21:55.169 "nvme_io": false, 00:21:55.169 "nvme_io_md": false, 00:21:55.169 "write_zeroes": true, 00:21:55.169 "zcopy": false, 00:21:55.169 "get_zone_info": false, 00:21:55.169 "zone_management": false, 00:21:55.169 "zone_append": false, 00:21:55.169 "compare": false, 00:21:55.169 "compare_and_write": false, 00:21:55.169 "abort": false, 00:21:55.169 "seek_hole": false, 00:21:55.169 "seek_data": false, 00:21:55.169 "copy": false, 00:21:55.169 "nvme_iov_md": false 00:21:55.169 }, 00:21:55.169 "memory_domains": [ 00:21:55.169 { 00:21:55.169 "dma_device_id": "system", 00:21:55.169 "dma_device_type": 1 00:21:55.169 }, 00:21:55.169 { 00:21:55.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.169 "dma_device_type": 2 00:21:55.169 }, 00:21:55.169 { 00:21:55.169 "dma_device_id": "system", 00:21:55.169 "dma_device_type": 1 00:21:55.169 }, 00:21:55.169 { 00:21:55.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.169 "dma_device_type": 2 00:21:55.169 }, 00:21:55.169 { 00:21:55.169 "dma_device_id": "system", 00:21:55.169 "dma_device_type": 1 00:21:55.169 }, 00:21:55.169 { 00:21:55.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:55.169 "dma_device_type": 2 00:21:55.169 }, 00:21:55.169 { 00:21:55.169 "dma_device_id": "system", 00:21:55.169 "dma_device_type": 1 00:21:55.169 }, 00:21:55.169 { 00:21:55.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.169 "dma_device_type": 2 00:21:55.169 } 00:21:55.169 ], 00:21:55.169 "driver_specific": { 00:21:55.169 "raid": { 00:21:55.169 "uuid": "6ba5dc03-1187-43f6-8bb2-80b59aa77069", 00:21:55.169 "strip_size_kb": 64, 00:21:55.170 "state": "online", 00:21:55.170 "raid_level": "raid0", 00:21:55.170 "superblock": false, 00:21:55.170 "num_base_bdevs": 4, 00:21:55.170 "num_base_bdevs_discovered": 4, 00:21:55.170 "num_base_bdevs_operational": 4, 00:21:55.170 "base_bdevs_list": [ 00:21:55.170 { 00:21:55.170 "name": "BaseBdev1", 00:21:55.170 "uuid": "0403ab86-e5a4-4356-baaa-7b2aae181498", 00:21:55.170 "is_configured": true, 00:21:55.170 "data_offset": 0, 00:21:55.170 "data_size": 65536 00:21:55.170 }, 00:21:55.170 { 00:21:55.170 "name": "BaseBdev2", 00:21:55.170 "uuid": "616ade79-e619-46db-a399-bc82d0393ab4", 00:21:55.170 "is_configured": true, 00:21:55.170 "data_offset": 0, 00:21:55.170 "data_size": 65536 00:21:55.170 }, 00:21:55.170 { 00:21:55.170 "name": "BaseBdev3", 00:21:55.170 "uuid": "2ad45371-6feb-4565-ab97-a8401df34e9b", 00:21:55.170 "is_configured": true, 00:21:55.170 "data_offset": 0, 00:21:55.170 "data_size": 65536 00:21:55.170 }, 00:21:55.170 { 00:21:55.170 "name": "BaseBdev4", 00:21:55.170 "uuid": "61a50314-313f-4103-ae86-2fe7e8cf1bc0", 00:21:55.170 "is_configured": true, 00:21:55.170 "data_offset": 0, 00:21:55.170 "data_size": 65536 00:21:55.170 } 00:21:55.170 ] 00:21:55.170 } 00:21:55.170 } 00:21:55.170 }' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:55.170 BaseBdev2 00:21:55.170 BaseBdev3 
00:21:55.170 BaseBdev4' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.170 07:42:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.170 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.428 07:42:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.428 [2024-10-07 07:42:54.798651] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:55.428 [2024-10-07 07:42:54.798841] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.428 [2024-10-07 07:42:54.799025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.428 "name": "Existed_Raid", 00:21:55.428 "uuid": "6ba5dc03-1187-43f6-8bb2-80b59aa77069", 00:21:55.428 "strip_size_kb": 64, 00:21:55.428 "state": "offline", 00:21:55.428 "raid_level": "raid0", 00:21:55.428 "superblock": false, 00:21:55.428 "num_base_bdevs": 4, 00:21:55.428 "num_base_bdevs_discovered": 3, 00:21:55.428 "num_base_bdevs_operational": 3, 00:21:55.428 "base_bdevs_list": [ 00:21:55.428 { 00:21:55.428 "name": null, 00:21:55.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.428 "is_configured": false, 00:21:55.428 "data_offset": 0, 00:21:55.428 "data_size": 65536 00:21:55.428 }, 00:21:55.428 { 00:21:55.428 "name": "BaseBdev2", 00:21:55.428 "uuid": "616ade79-e619-46db-a399-bc82d0393ab4", 00:21:55.428 "is_configured": 
true, 00:21:55.428 "data_offset": 0, 00:21:55.428 "data_size": 65536 00:21:55.428 }, 00:21:55.428 { 00:21:55.428 "name": "BaseBdev3", 00:21:55.428 "uuid": "2ad45371-6feb-4565-ab97-a8401df34e9b", 00:21:55.428 "is_configured": true, 00:21:55.428 "data_offset": 0, 00:21:55.428 "data_size": 65536 00:21:55.428 }, 00:21:55.428 { 00:21:55.428 "name": "BaseBdev4", 00:21:55.428 "uuid": "61a50314-313f-4103-ae86-2fe7e8cf1bc0", 00:21:55.428 "is_configured": true, 00:21:55.428 "data_offset": 0, 00:21:55.428 "data_size": 65536 00:21:55.428 } 00:21:55.428 ] 00:21:55.428 }' 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.428 07:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 
00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.993 [2024-10-07 07:42:55.412456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.993 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:55.994 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.994 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:55.994 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.252 [2024-10-07 07:42:55.581041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:56.252 07:42:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.252 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.252 [2024-10-07 07:42:55.750470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:56.252 [2024-10-07 07:42:55.750672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.510 BaseBdev2 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # 
bdev_timeout=2000 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.510 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.511 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.511 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:56.511 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.511 07:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.511 [ 00:21:56.511 { 00:21:56.511 "name": "BaseBdev2", 00:21:56.511 "aliases": [ 00:21:56.511 "6e283a35-45f0-4799-b304-11c9fd1951ec" 00:21:56.511 ], 00:21:56.511 "product_name": "Malloc disk", 00:21:56.511 "block_size": 512, 00:21:56.511 "num_blocks": 65536, 00:21:56.511 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:21:56.511 "assigned_rate_limits": { 00:21:56.511 "rw_ios_per_sec": 0, 00:21:56.511 "rw_mbytes_per_sec": 0, 00:21:56.511 "r_mbytes_per_sec": 0, 00:21:56.511 "w_mbytes_per_sec": 0 00:21:56.511 }, 00:21:56.511 "claimed": false, 00:21:56.511 "zoned": false, 00:21:56.511 "supported_io_types": { 00:21:56.511 "read": true, 00:21:56.511 "write": true, 00:21:56.511 "unmap": true, 00:21:56.511 "flush": true, 00:21:56.511 "reset": true, 00:21:56.511 "nvme_admin": false, 00:21:56.511 "nvme_io": false, 00:21:56.511 "nvme_io_md": false, 00:21:56.511 "write_zeroes": true, 00:21:56.511 "zcopy": true, 00:21:56.511 "get_zone_info": false, 00:21:56.511 "zone_management": false, 00:21:56.511 "zone_append": false, 00:21:56.511 "compare": false, 00:21:56.511 "compare_and_write": false, 00:21:56.511 "abort": true, 00:21:56.511 "seek_hole": false, 00:21:56.511 
"seek_data": false, 00:21:56.511 "copy": true, 00:21:56.511 "nvme_iov_md": false 00:21:56.511 }, 00:21:56.511 "memory_domains": [ 00:21:56.511 { 00:21:56.511 "dma_device_id": "system", 00:21:56.511 "dma_device_type": 1 00:21:56.511 }, 00:21:56.511 { 00:21:56.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.511 "dma_device_type": 2 00:21:56.511 } 00:21:56.511 ], 00:21:56.511 "driver_specific": {} 00:21:56.511 } 00:21:56.511 ] 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.511 BaseBdev3 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 
00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.511 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.773 [ 00:21:56.773 { 00:21:56.773 "name": "BaseBdev3", 00:21:56.773 "aliases": [ 00:21:56.773 "23f3d741-a229-4f03-a2a1-cf23262f6b8d" 00:21:56.773 ], 00:21:56.773 "product_name": "Malloc disk", 00:21:56.773 "block_size": 512, 00:21:56.773 "num_blocks": 65536, 00:21:56.774 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:21:56.774 "assigned_rate_limits": { 00:21:56.774 "rw_ios_per_sec": 0, 00:21:56.774 "rw_mbytes_per_sec": 0, 00:21:56.774 "r_mbytes_per_sec": 0, 00:21:56.774 "w_mbytes_per_sec": 0 00:21:56.774 }, 00:21:56.774 "claimed": false, 00:21:56.774 "zoned": false, 00:21:56.774 "supported_io_types": { 00:21:56.774 "read": true, 00:21:56.774 "write": true, 00:21:56.774 "unmap": true, 00:21:56.774 "flush": true, 00:21:56.774 "reset": true, 00:21:56.774 "nvme_admin": false, 00:21:56.774 "nvme_io": false, 00:21:56.774 "nvme_io_md": false, 00:21:56.774 "write_zeroes": true, 00:21:56.774 "zcopy": true, 00:21:56.774 "get_zone_info": false, 00:21:56.774 "zone_management": false, 00:21:56.774 "zone_append": false, 00:21:56.774 "compare": false, 00:21:56.774 "compare_and_write": false, 00:21:56.774 "abort": true, 00:21:56.774 "seek_hole": false, 00:21:56.774 "seek_data": false, 
00:21:56.774 "copy": true, 00:21:56.774 "nvme_iov_md": false 00:21:56.774 }, 00:21:56.774 "memory_domains": [ 00:21:56.774 { 00:21:56.774 "dma_device_id": "system", 00:21:56.774 "dma_device_type": 1 00:21:56.774 }, 00:21:56.774 { 00:21:56.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.774 "dma_device_type": 2 00:21:56.774 } 00:21:56.774 ], 00:21:56.774 "driver_specific": {} 00:21:56.774 } 00:21:56.774 ] 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.774 BaseBdev4 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:56.774 
07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.774 [ 00:21:56.774 { 00:21:56.774 "name": "BaseBdev4", 00:21:56.774 "aliases": [ 00:21:56.774 "ee32fb4d-586d-4cb6-b348-848c27809ed0" 00:21:56.774 ], 00:21:56.774 "product_name": "Malloc disk", 00:21:56.774 "block_size": 512, 00:21:56.774 "num_blocks": 65536, 00:21:56.774 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:21:56.774 "assigned_rate_limits": { 00:21:56.774 "rw_ios_per_sec": 0, 00:21:56.774 "rw_mbytes_per_sec": 0, 00:21:56.774 "r_mbytes_per_sec": 0, 00:21:56.774 "w_mbytes_per_sec": 0 00:21:56.774 }, 00:21:56.774 "claimed": false, 00:21:56.774 "zoned": false, 00:21:56.774 "supported_io_types": { 00:21:56.774 "read": true, 00:21:56.774 "write": true, 00:21:56.774 "unmap": true, 00:21:56.774 "flush": true, 00:21:56.774 "reset": true, 00:21:56.774 "nvme_admin": false, 00:21:56.774 "nvme_io": false, 00:21:56.774 "nvme_io_md": false, 00:21:56.774 "write_zeroes": true, 00:21:56.774 "zcopy": true, 00:21:56.774 "get_zone_info": false, 00:21:56.774 "zone_management": false, 00:21:56.774 "zone_append": false, 00:21:56.774 "compare": false, 00:21:56.774 "compare_and_write": false, 00:21:56.774 "abort": true, 00:21:56.774 "seek_hole": false, 00:21:56.774 "seek_data": false, 00:21:56.774 
"copy": true, 00:21:56.774 "nvme_iov_md": false 00:21:56.774 }, 00:21:56.774 "memory_domains": [ 00:21:56.774 { 00:21:56.774 "dma_device_id": "system", 00:21:56.774 "dma_device_type": 1 00:21:56.774 }, 00:21:56.774 { 00:21:56.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.774 "dma_device_type": 2 00:21:56.774 } 00:21:56.774 ], 00:21:56.774 "driver_specific": {} 00:21:56.774 } 00:21:56.774 ] 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.774 [2024-10-07 07:42:56.181520] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:56.774 [2024-10-07 07:42:56.181754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:56.774 [2024-10-07 07:42:56.181890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.774 [2024-10-07 07:42:56.184350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:56.774 [2024-10-07 07:42:56.184572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.774 07:42:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.774 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.775 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:56.775 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:56.775 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.775 "name": "Existed_Raid", 00:21:56.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.775 "strip_size_kb": 64, 00:21:56.775 "state": "configuring", 00:21:56.775 
"raid_level": "raid0", 00:21:56.775 "superblock": false, 00:21:56.775 "num_base_bdevs": 4, 00:21:56.775 "num_base_bdevs_discovered": 3, 00:21:56.775 "num_base_bdevs_operational": 4, 00:21:56.775 "base_bdevs_list": [ 00:21:56.775 { 00:21:56.775 "name": "BaseBdev1", 00:21:56.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.775 "is_configured": false, 00:21:56.775 "data_offset": 0, 00:21:56.775 "data_size": 0 00:21:56.775 }, 00:21:56.775 { 00:21:56.775 "name": "BaseBdev2", 00:21:56.775 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:21:56.775 "is_configured": true, 00:21:56.775 "data_offset": 0, 00:21:56.775 "data_size": 65536 00:21:56.775 }, 00:21:56.775 { 00:21:56.775 "name": "BaseBdev3", 00:21:56.775 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:21:56.775 "is_configured": true, 00:21:56.775 "data_offset": 0, 00:21:56.775 "data_size": 65536 00:21:56.775 }, 00:21:56.775 { 00:21:56.775 "name": "BaseBdev4", 00:21:56.775 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:21:56.775 "is_configured": true, 00:21:56.775 "data_offset": 0, 00:21:56.775 "data_size": 65536 00:21:56.775 } 00:21:56.775 ] 00:21:56.775 }' 00:21:56.775 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.775 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.342 [2024-10-07 07:42:56.653623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.342 "name": "Existed_Raid", 00:21:57.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.342 "strip_size_kb": 64, 00:21:57.342 "state": "configuring", 00:21:57.342 "raid_level": "raid0", 00:21:57.342 "superblock": false, 00:21:57.342 
"num_base_bdevs": 4, 00:21:57.342 "num_base_bdevs_discovered": 2, 00:21:57.342 "num_base_bdevs_operational": 4, 00:21:57.342 "base_bdevs_list": [ 00:21:57.342 { 00:21:57.342 "name": "BaseBdev1", 00:21:57.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.342 "is_configured": false, 00:21:57.342 "data_offset": 0, 00:21:57.342 "data_size": 0 00:21:57.342 }, 00:21:57.342 { 00:21:57.342 "name": null, 00:21:57.342 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:21:57.342 "is_configured": false, 00:21:57.342 "data_offset": 0, 00:21:57.342 "data_size": 65536 00:21:57.342 }, 00:21:57.342 { 00:21:57.342 "name": "BaseBdev3", 00:21:57.342 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:21:57.342 "is_configured": true, 00:21:57.342 "data_offset": 0, 00:21:57.342 "data_size": 65536 00:21:57.342 }, 00:21:57.342 { 00:21:57.342 "name": "BaseBdev4", 00:21:57.342 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:21:57.342 "is_configured": true, 00:21:57.342 "data_offset": 0, 00:21:57.342 "data_size": 65536 00:21:57.342 } 00:21:57.342 ] 00:21:57.342 }' 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.342 07:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.600 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.600 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:57.600 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:57.600 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.600 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:57.858 07:42:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.858 [2024-10-07 07:42:57.205482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.858 BaseBdev1 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:57.858 07:42:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.858 [ 00:21:57.858 { 00:21:57.858 "name": "BaseBdev1", 00:21:57.858 "aliases": [ 00:21:57.858 "9a752de6-cf0f-4ff9-b873-26f85b96bca8" 00:21:57.858 ], 00:21:57.858 "product_name": "Malloc disk", 00:21:57.858 "block_size": 512, 00:21:57.858 "num_blocks": 65536, 00:21:57.858 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:21:57.858 "assigned_rate_limits": { 00:21:57.858 "rw_ios_per_sec": 0, 00:21:57.858 "rw_mbytes_per_sec": 0, 00:21:57.858 "r_mbytes_per_sec": 0, 00:21:57.858 "w_mbytes_per_sec": 0 00:21:57.858 }, 00:21:57.858 "claimed": true, 00:21:57.858 "claim_type": "exclusive_write", 00:21:57.858 "zoned": false, 00:21:57.858 "supported_io_types": { 00:21:57.858 "read": true, 00:21:57.858 "write": true, 00:21:57.858 "unmap": true, 00:21:57.858 "flush": true, 00:21:57.858 "reset": true, 00:21:57.858 "nvme_admin": false, 00:21:57.858 "nvme_io": false, 00:21:57.858 "nvme_io_md": false, 00:21:57.858 "write_zeroes": true, 00:21:57.859 "zcopy": true, 00:21:57.859 "get_zone_info": false, 00:21:57.859 "zone_management": false, 00:21:57.859 "zone_append": false, 00:21:57.859 "compare": false, 00:21:57.859 "compare_and_write": false, 00:21:57.859 "abort": true, 00:21:57.859 "seek_hole": false, 00:21:57.859 "seek_data": false, 00:21:57.859 "copy": true, 00:21:57.859 "nvme_iov_md": false 00:21:57.859 }, 00:21:57.859 "memory_domains": [ 00:21:57.859 { 00:21:57.859 "dma_device_id": "system", 00:21:57.859 "dma_device_type": 1 00:21:57.859 }, 00:21:57.859 { 00:21:57.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.859 "dma_device_type": 2 00:21:57.859 } 00:21:57.859 ], 00:21:57.859 "driver_specific": {} 00:21:57.859 } 00:21:57.859 ] 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.859 "name": "Existed_Raid", 00:21:57.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.859 "strip_size_kb": 64, 00:21:57.859 "state": "configuring", 00:21:57.859 "raid_level": "raid0", 00:21:57.859 "superblock": false, 
00:21:57.859 "num_base_bdevs": 4, 00:21:57.859 "num_base_bdevs_discovered": 3, 00:21:57.859 "num_base_bdevs_operational": 4, 00:21:57.859 "base_bdevs_list": [ 00:21:57.859 { 00:21:57.859 "name": "BaseBdev1", 00:21:57.859 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:21:57.859 "is_configured": true, 00:21:57.859 "data_offset": 0, 00:21:57.859 "data_size": 65536 00:21:57.859 }, 00:21:57.859 { 00:21:57.859 "name": null, 00:21:57.859 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:21:57.859 "is_configured": false, 00:21:57.859 "data_offset": 0, 00:21:57.859 "data_size": 65536 00:21:57.859 }, 00:21:57.859 { 00:21:57.859 "name": "BaseBdev3", 00:21:57.859 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:21:57.859 "is_configured": true, 00:21:57.859 "data_offset": 0, 00:21:57.859 "data_size": 65536 00:21:57.859 }, 00:21:57.859 { 00:21:57.859 "name": "BaseBdev4", 00:21:57.859 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:21:57.859 "is_configured": true, 00:21:57.859 "data_offset": 0, 00:21:57.859 "data_size": 65536 00:21:57.859 } 00:21:57.859 ] 00:21:57.859 }' 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.859 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:58.424 07:42:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.424 [2024-10-07 07:42:57.769740] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.424 07:42:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:58.424 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.424 "name": "Existed_Raid", 00:21:58.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.424 "strip_size_kb": 64, 00:21:58.424 "state": "configuring", 00:21:58.424 "raid_level": "raid0", 00:21:58.424 "superblock": false, 00:21:58.424 "num_base_bdevs": 4, 00:21:58.424 "num_base_bdevs_discovered": 2, 00:21:58.424 "num_base_bdevs_operational": 4, 00:21:58.424 "base_bdevs_list": [ 00:21:58.424 { 00:21:58.424 "name": "BaseBdev1", 00:21:58.424 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:21:58.424 "is_configured": true, 00:21:58.424 "data_offset": 0, 00:21:58.424 "data_size": 65536 00:21:58.425 }, 00:21:58.425 { 00:21:58.425 "name": null, 00:21:58.425 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:21:58.425 "is_configured": false, 00:21:58.425 "data_offset": 0, 00:21:58.425 "data_size": 65536 00:21:58.425 }, 00:21:58.425 { 00:21:58.425 "name": null, 00:21:58.425 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:21:58.425 "is_configured": false, 00:21:58.425 "data_offset": 0, 00:21:58.425 "data_size": 65536 00:21:58.425 }, 00:21:58.425 { 00:21:58.425 "name": "BaseBdev4", 00:21:58.425 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:21:58.425 "is_configured": true, 00:21:58.425 "data_offset": 0, 00:21:58.425 "data_size": 65536 00:21:58.425 } 00:21:58.425 ] 00:21:58.425 }' 00:21:58.425 07:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.425 07:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.682 07:42:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:58.682 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.682 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:58.683 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.942 [2024-10-07 07:42:58.294007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.942 "name": "Existed_Raid", 00:21:58.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.942 "strip_size_kb": 64, 00:21:58.942 "state": "configuring", 00:21:58.942 "raid_level": "raid0", 00:21:58.942 "superblock": false, 00:21:58.942 "num_base_bdevs": 4, 00:21:58.942 "num_base_bdevs_discovered": 3, 00:21:58.942 "num_base_bdevs_operational": 4, 00:21:58.942 "base_bdevs_list": [ 00:21:58.942 { 00:21:58.942 "name": "BaseBdev1", 00:21:58.942 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:21:58.942 "is_configured": true, 00:21:58.942 "data_offset": 0, 00:21:58.942 "data_size": 65536 00:21:58.942 }, 00:21:58.942 { 00:21:58.942 "name": null, 00:21:58.942 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:21:58.942 "is_configured": false, 00:21:58.942 "data_offset": 0, 00:21:58.942 "data_size": 65536 00:21:58.942 }, 00:21:58.942 { 00:21:58.942 "name": "BaseBdev3", 00:21:58.942 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 
00:21:58.942 "is_configured": true, 00:21:58.942 "data_offset": 0, 00:21:58.942 "data_size": 65536 00:21:58.942 }, 00:21:58.942 { 00:21:58.942 "name": "BaseBdev4", 00:21:58.942 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:21:58.942 "is_configured": true, 00:21:58.942 "data_offset": 0, 00:21:58.942 "data_size": 65536 00:21:58.942 } 00:21:58.942 ] 00:21:58.942 }' 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.942 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.200 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:59.200 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.200 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:59.201 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.201 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:59.201 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:59.459 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:59.459 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.460 [2024-10-07 07:42:58.766112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:59.460 07:42:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.460 "name": "Existed_Raid", 00:21:59.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.460 "strip_size_kb": 64, 00:21:59.460 "state": "configuring", 00:21:59.460 "raid_level": "raid0", 00:21:59.460 "superblock": false, 00:21:59.460 "num_base_bdevs": 4, 00:21:59.460 "num_base_bdevs_discovered": 2, 00:21:59.460 
"num_base_bdevs_operational": 4, 00:21:59.460 "base_bdevs_list": [ 00:21:59.460 { 00:21:59.460 "name": null, 00:21:59.460 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:21:59.460 "is_configured": false, 00:21:59.460 "data_offset": 0, 00:21:59.460 "data_size": 65536 00:21:59.460 }, 00:21:59.460 { 00:21:59.460 "name": null, 00:21:59.460 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:21:59.460 "is_configured": false, 00:21:59.460 "data_offset": 0, 00:21:59.460 "data_size": 65536 00:21:59.460 }, 00:21:59.460 { 00:21:59.460 "name": "BaseBdev3", 00:21:59.460 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:21:59.460 "is_configured": true, 00:21:59.460 "data_offset": 0, 00:21:59.460 "data_size": 65536 00:21:59.460 }, 00:21:59.460 { 00:21:59.460 "name": "BaseBdev4", 00:21:59.460 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:21:59.460 "is_configured": true, 00:21:59.460 "data_offset": 0, 00:21:59.460 "data_size": 65536 00:21:59.460 } 00:21:59.460 ] 00:21:59.460 }' 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.460 07:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.027 [2024-10-07 07:42:59.355328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.027 07:42:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.027 "name": "Existed_Raid", 00:22:00.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.027 "strip_size_kb": 64, 00:22:00.027 "state": "configuring", 00:22:00.027 "raid_level": "raid0", 00:22:00.027 "superblock": false, 00:22:00.027 "num_base_bdevs": 4, 00:22:00.027 "num_base_bdevs_discovered": 3, 00:22:00.027 "num_base_bdevs_operational": 4, 00:22:00.027 "base_bdevs_list": [ 00:22:00.027 { 00:22:00.027 "name": null, 00:22:00.027 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:22:00.027 "is_configured": false, 00:22:00.027 "data_offset": 0, 00:22:00.027 "data_size": 65536 00:22:00.027 }, 00:22:00.027 { 00:22:00.027 "name": "BaseBdev2", 00:22:00.027 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:22:00.027 "is_configured": true, 00:22:00.027 "data_offset": 0, 00:22:00.027 "data_size": 65536 00:22:00.027 }, 00:22:00.027 { 00:22:00.027 "name": "BaseBdev3", 00:22:00.027 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:22:00.027 "is_configured": true, 00:22:00.027 "data_offset": 0, 00:22:00.027 "data_size": 65536 00:22:00.027 }, 00:22:00.027 { 00:22:00.027 "name": "BaseBdev4", 00:22:00.027 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:22:00.027 "is_configured": true, 00:22:00.027 "data_offset": 0, 00:22:00.027 "data_size": 65536 00:22:00.027 } 00:22:00.027 ] 00:22:00.027 }' 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.027 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.286 07:42:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:00.286 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.286 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9a752de6-cf0f-4ff9-b873-26f85b96bca8 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.545 [2024-10-07 07:42:59.962753] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:00.545 [2024-10-07 07:42:59.963081] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:00.545 [2024-10-07 07:42:59.963106] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:00.545 [2024-10-07 07:42:59.963465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:22:00.545 [2024-10-07 07:42:59.963658] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:00.545 [2024-10-07 07:42:59.963676] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:00.545 [2024-10-07 07:42:59.964055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.545 NewBaseBdev 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.545 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:00.546 [ 00:22:00.546 { 00:22:00.546 "name": "NewBaseBdev", 00:22:00.546 "aliases": [ 00:22:00.546 "9a752de6-cf0f-4ff9-b873-26f85b96bca8" 00:22:00.546 ], 00:22:00.546 "product_name": "Malloc disk", 00:22:00.546 "block_size": 512, 00:22:00.546 "num_blocks": 65536, 00:22:00.546 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:22:00.546 "assigned_rate_limits": { 00:22:00.546 "rw_ios_per_sec": 0, 00:22:00.546 "rw_mbytes_per_sec": 0, 00:22:00.546 "r_mbytes_per_sec": 0, 00:22:00.546 "w_mbytes_per_sec": 0 00:22:00.546 }, 00:22:00.546 "claimed": true, 00:22:00.546 "claim_type": "exclusive_write", 00:22:00.546 "zoned": false, 00:22:00.546 "supported_io_types": { 00:22:00.546 "read": true, 00:22:00.546 "write": true, 00:22:00.546 "unmap": true, 00:22:00.546 "flush": true, 00:22:00.546 "reset": true, 00:22:00.546 "nvme_admin": false, 00:22:00.546 "nvme_io": false, 00:22:00.546 "nvme_io_md": false, 00:22:00.546 "write_zeroes": true, 00:22:00.546 "zcopy": true, 00:22:00.546 "get_zone_info": false, 00:22:00.546 "zone_management": false, 00:22:00.546 "zone_append": false, 00:22:00.546 "compare": false, 00:22:00.546 "compare_and_write": false, 00:22:00.546 "abort": true, 00:22:00.546 "seek_hole": false, 00:22:00.546 "seek_data": false, 00:22:00.546 "copy": true, 00:22:00.546 "nvme_iov_md": false 00:22:00.546 }, 00:22:00.546 "memory_domains": [ 00:22:00.546 { 00:22:00.546 "dma_device_id": "system", 00:22:00.546 "dma_device_type": 1 00:22:00.546 }, 00:22:00.546 { 00:22:00.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.546 "dma_device_type": 2 00:22:00.546 } 00:22:00.546 ], 00:22:00.546 "driver_specific": {} 00:22:00.546 } 00:22:00.546 ] 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.546 07:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.546 "name": "Existed_Raid", 00:22:00.546 "uuid": "671d291f-b990-4946-9fea-dcf876c29e24", 00:22:00.546 "strip_size_kb": 64, 00:22:00.546 "state": "online", 00:22:00.546 "raid_level": "raid0", 00:22:00.546 "superblock": false, 00:22:00.546 "num_base_bdevs": 4, 00:22:00.546 
"num_base_bdevs_discovered": 4, 00:22:00.546 "num_base_bdevs_operational": 4, 00:22:00.546 "base_bdevs_list": [ 00:22:00.546 { 00:22:00.546 "name": "NewBaseBdev", 00:22:00.546 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:22:00.546 "is_configured": true, 00:22:00.546 "data_offset": 0, 00:22:00.546 "data_size": 65536 00:22:00.546 }, 00:22:00.546 { 00:22:00.546 "name": "BaseBdev2", 00:22:00.546 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:22:00.546 "is_configured": true, 00:22:00.546 "data_offset": 0, 00:22:00.546 "data_size": 65536 00:22:00.546 }, 00:22:00.546 { 00:22:00.546 "name": "BaseBdev3", 00:22:00.546 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:22:00.546 "is_configured": true, 00:22:00.546 "data_offset": 0, 00:22:00.546 "data_size": 65536 00:22:00.546 }, 00:22:00.546 { 00:22:00.546 "name": "BaseBdev4", 00:22:00.546 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:22:00.546 "is_configured": true, 00:22:00.546 "data_offset": 0, 00:22:00.546 "data_size": 65536 00:22:00.546 } 00:22:00.546 ] 00:22:00.546 }' 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.546 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.114 [2024-10-07 07:43:00.475409] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:01.114 "name": "Existed_Raid", 00:22:01.114 "aliases": [ 00:22:01.114 "671d291f-b990-4946-9fea-dcf876c29e24" 00:22:01.114 ], 00:22:01.114 "product_name": "Raid Volume", 00:22:01.114 "block_size": 512, 00:22:01.114 "num_blocks": 262144, 00:22:01.114 "uuid": "671d291f-b990-4946-9fea-dcf876c29e24", 00:22:01.114 "assigned_rate_limits": { 00:22:01.114 "rw_ios_per_sec": 0, 00:22:01.114 "rw_mbytes_per_sec": 0, 00:22:01.114 "r_mbytes_per_sec": 0, 00:22:01.114 "w_mbytes_per_sec": 0 00:22:01.114 }, 00:22:01.114 "claimed": false, 00:22:01.114 "zoned": false, 00:22:01.114 "supported_io_types": { 00:22:01.114 "read": true, 00:22:01.114 "write": true, 00:22:01.114 "unmap": true, 00:22:01.114 "flush": true, 00:22:01.114 "reset": true, 00:22:01.114 "nvme_admin": false, 00:22:01.114 "nvme_io": false, 00:22:01.114 "nvme_io_md": false, 00:22:01.114 "write_zeroes": true, 00:22:01.114 "zcopy": false, 00:22:01.114 "get_zone_info": false, 00:22:01.114 "zone_management": false, 00:22:01.114 "zone_append": false, 00:22:01.114 "compare": false, 00:22:01.114 "compare_and_write": false, 00:22:01.114 "abort": false, 00:22:01.114 "seek_hole": false, 00:22:01.114 "seek_data": false, 00:22:01.114 "copy": false, 00:22:01.114 "nvme_iov_md": false 00:22:01.114 }, 00:22:01.114 "memory_domains": [ 
00:22:01.114 { 00:22:01.114 "dma_device_id": "system", 00:22:01.114 "dma_device_type": 1 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.114 "dma_device_type": 2 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "dma_device_id": "system", 00:22:01.114 "dma_device_type": 1 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.114 "dma_device_type": 2 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "dma_device_id": "system", 00:22:01.114 "dma_device_type": 1 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.114 "dma_device_type": 2 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "dma_device_id": "system", 00:22:01.114 "dma_device_type": 1 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.114 "dma_device_type": 2 00:22:01.114 } 00:22:01.114 ], 00:22:01.114 "driver_specific": { 00:22:01.114 "raid": { 00:22:01.114 "uuid": "671d291f-b990-4946-9fea-dcf876c29e24", 00:22:01.114 "strip_size_kb": 64, 00:22:01.114 "state": "online", 00:22:01.114 "raid_level": "raid0", 00:22:01.114 "superblock": false, 00:22:01.114 "num_base_bdevs": 4, 00:22:01.114 "num_base_bdevs_discovered": 4, 00:22:01.114 "num_base_bdevs_operational": 4, 00:22:01.114 "base_bdevs_list": [ 00:22:01.114 { 00:22:01.114 "name": "NewBaseBdev", 00:22:01.114 "uuid": "9a752de6-cf0f-4ff9-b873-26f85b96bca8", 00:22:01.114 "is_configured": true, 00:22:01.114 "data_offset": 0, 00:22:01.114 "data_size": 65536 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "name": "BaseBdev2", 00:22:01.114 "uuid": "6e283a35-45f0-4799-b304-11c9fd1951ec", 00:22:01.114 "is_configured": true, 00:22:01.114 "data_offset": 0, 00:22:01.114 "data_size": 65536 00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "name": "BaseBdev3", 00:22:01.114 "uuid": "23f3d741-a229-4f03-a2a1-cf23262f6b8d", 00:22:01.114 "is_configured": true, 00:22:01.114 "data_offset": 0, 00:22:01.114 "data_size": 65536 
00:22:01.114 }, 00:22:01.114 { 00:22:01.114 "name": "BaseBdev4", 00:22:01.114 "uuid": "ee32fb4d-586d-4cb6-b348-848c27809ed0", 00:22:01.114 "is_configured": true, 00:22:01.114 "data_offset": 0, 00:22:01.114 "data_size": 65536 00:22:01.114 } 00:22:01.114 ] 00:22:01.114 } 00:22:01.114 } 00:22:01.114 }' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:01.114 BaseBdev2 00:22:01.114 BaseBdev3 00:22:01.114 BaseBdev4' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.114 
07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.114 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:01.373 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.373 [2024-10-07 07:43:00.803139] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:01.373 [2024-10-07 07:43:00.803360] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:01.374 [2024-10-07 07:43:00.803557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:01.374 [2024-10-07 07:43:00.803774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:01.374 [2024-10-07 07:43:00.803903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69509 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # '[' -z 69509 ']' 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # kill -0 69509 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 69509 00:22:01.374 killing process with pid 69509 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 69509' 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 69509 00:22:01.374 [2024-10-07 07:43:00.849796] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:01.374 07:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 69509 00:22:01.942 [2024-10-07 07:43:01.290787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:03.318 ************************************ 00:22:03.318 END TEST raid_state_function_test 00:22:03.318 ************************************ 00:22:03.318 07:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:03.318 00:22:03.318 real 0m12.445s 00:22:03.319 user 0m19.702s 00:22:03.319 sys 0m2.180s 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.319 07:43:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:22:03.319 07:43:02 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:22:03.319 07:43:02 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:03.319 07:43:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:03.319 ************************************ 00:22:03.319 START TEST raid_state_function_test_sb 00:22:03.319 ************************************ 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid0 4 true 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:03.319 
07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:03.319 Process raid pid: 70193 00:22:03.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70193 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70193' 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70193 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 70193 ']' 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:03.319 07:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.578 [2024-10-07 07:43:02.888866] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:03.578 [2024-10-07 07:43:02.889277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.578 [2024-10-07 07:43:03.074270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.837 [2024-10-07 07:43:03.320796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.096 [2024-10-07 07:43:03.570645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:04.096 [2024-10-07 07:43:03.570693] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.355 [2024-10-07 07:43:03.810655] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:04.355 [2024-10-07 07:43:03.810881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:04.355 [2024-10-07 07:43:03.810993] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:04.355 [2024-10-07 07:43:03.811055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:04.355 [2024-10-07 07:43:03.811188] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:22:04.355 [2024-10-07 07:43:03.811300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:04.355 [2024-10-07 07:43:03.811393] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:04.355 [2024-10-07 07:43:03.811448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.355 07:43:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.355 "name": "Existed_Raid", 00:22:04.355 "uuid": "ff1c2d3a-1688-496a-8e46-6419d4cbf618", 00:22:04.355 "strip_size_kb": 64, 00:22:04.355 "state": "configuring", 00:22:04.355 "raid_level": "raid0", 00:22:04.355 "superblock": true, 00:22:04.355 "num_base_bdevs": 4, 00:22:04.355 "num_base_bdevs_discovered": 0, 00:22:04.355 "num_base_bdevs_operational": 4, 00:22:04.355 "base_bdevs_list": [ 00:22:04.355 { 00:22:04.355 "name": "BaseBdev1", 00:22:04.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.355 "is_configured": false, 00:22:04.355 "data_offset": 0, 00:22:04.355 "data_size": 0 00:22:04.355 }, 00:22:04.355 { 00:22:04.355 "name": "BaseBdev2", 00:22:04.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.355 "is_configured": false, 00:22:04.355 "data_offset": 0, 00:22:04.355 "data_size": 0 00:22:04.355 }, 00:22:04.355 { 00:22:04.355 "name": "BaseBdev3", 00:22:04.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.355 "is_configured": false, 00:22:04.355 "data_offset": 0, 00:22:04.355 "data_size": 0 00:22:04.355 }, 00:22:04.355 { 00:22:04.355 "name": "BaseBdev4", 00:22:04.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.355 "is_configured": false, 00:22:04.355 "data_offset": 0, 00:22:04.355 "data_size": 0 00:22:04.355 } 00:22:04.355 ] 00:22:04.355 }' 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.355 07:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.922 07:43:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.922 [2024-10-07 07:43:04.298679] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:04.922 [2024-10-07 07:43:04.298881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.922 [2024-10-07 07:43:04.306704] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:04.922 [2024-10-07 07:43:04.306901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:04.922 [2024-10-07 07:43:04.307007] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:04.922 [2024-10-07 07:43:04.307062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:04.922 [2024-10-07 07:43:04.307142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:04.922 [2024-10-07 07:43:04.307194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:04.922 [2024-10-07 07:43:04.307231] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:22:04.922 [2024-10-07 07:43:04.307315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:04.922 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.923 BaseBdev1 00:22:04.923 [2024-10-07 07:43:04.363928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.923 [ 00:22:04.923 { 00:22:04.923 "name": "BaseBdev1", 00:22:04.923 "aliases": [ 00:22:04.923 "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9" 00:22:04.923 ], 00:22:04.923 "product_name": "Malloc disk", 00:22:04.923 "block_size": 512, 00:22:04.923 "num_blocks": 65536, 00:22:04.923 "uuid": "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9", 00:22:04.923 "assigned_rate_limits": { 00:22:04.923 "rw_ios_per_sec": 0, 00:22:04.923 "rw_mbytes_per_sec": 0, 00:22:04.923 "r_mbytes_per_sec": 0, 00:22:04.923 "w_mbytes_per_sec": 0 00:22:04.923 }, 00:22:04.923 "claimed": true, 00:22:04.923 "claim_type": "exclusive_write", 00:22:04.923 "zoned": false, 00:22:04.923 "supported_io_types": { 00:22:04.923 "read": true, 00:22:04.923 "write": true, 00:22:04.923 "unmap": true, 00:22:04.923 "flush": true, 00:22:04.923 "reset": true, 00:22:04.923 "nvme_admin": false, 00:22:04.923 "nvme_io": false, 00:22:04.923 "nvme_io_md": false, 00:22:04.923 "write_zeroes": true, 00:22:04.923 "zcopy": true, 00:22:04.923 "get_zone_info": false, 00:22:04.923 "zone_management": false, 00:22:04.923 "zone_append": false, 00:22:04.923 "compare": false, 00:22:04.923 "compare_and_write": false, 00:22:04.923 "abort": true, 00:22:04.923 "seek_hole": false, 00:22:04.923 "seek_data": false, 00:22:04.923 "copy": true, 00:22:04.923 "nvme_iov_md": false 00:22:04.923 }, 00:22:04.923 "memory_domains": [ 00:22:04.923 { 00:22:04.923 "dma_device_id": "system", 00:22:04.923 "dma_device_type": 1 00:22:04.923 }, 00:22:04.923 { 00:22:04.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.923 "dma_device_type": 2 00:22:04.923 } 
00:22:04.923 ], 00:22:04.923 "driver_specific": {} 00:22:04.923 } 00:22:04.923 ] 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.923 07:43:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.923 "name": "Existed_Raid", 00:22:04.923 "uuid": "af245979-7bef-4c79-9559-adb60051b8a6", 00:22:04.923 "strip_size_kb": 64, 00:22:04.923 "state": "configuring", 00:22:04.923 "raid_level": "raid0", 00:22:04.923 "superblock": true, 00:22:04.923 "num_base_bdevs": 4, 00:22:04.923 "num_base_bdevs_discovered": 1, 00:22:04.923 "num_base_bdevs_operational": 4, 00:22:04.923 "base_bdevs_list": [ 00:22:04.923 { 00:22:04.923 "name": "BaseBdev1", 00:22:04.923 "uuid": "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9", 00:22:04.923 "is_configured": true, 00:22:04.923 "data_offset": 2048, 00:22:04.923 "data_size": 63488 00:22:04.923 }, 00:22:04.923 { 00:22:04.923 "name": "BaseBdev2", 00:22:04.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.923 "is_configured": false, 00:22:04.923 "data_offset": 0, 00:22:04.923 "data_size": 0 00:22:04.923 }, 00:22:04.923 { 00:22:04.923 "name": "BaseBdev3", 00:22:04.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.923 "is_configured": false, 00:22:04.923 "data_offset": 0, 00:22:04.923 "data_size": 0 00:22:04.923 }, 00:22:04.923 { 00:22:04.923 "name": "BaseBdev4", 00:22:04.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.923 "is_configured": false, 00:22:04.923 "data_offset": 0, 00:22:04.923 "data_size": 0 00:22:04.923 } 00:22:04.923 ] 00:22:04.923 }' 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.923 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:05.530 07:43:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.530 [2024-10-07 07:43:04.888123] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:05.530 [2024-10-07 07:43:04.888196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.530 [2024-10-07 07:43:04.896174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:05.530 [2024-10-07 07:43:04.898596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.530 [2024-10-07 07:43:04.898823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.530 [2024-10-07 07:43:04.898929] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:05.530 [2024-10-07 07:43:04.898963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.530 [2024-10-07 07:43:04.898974] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:05.530 [2024-10-07 07:43:04.898990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.530 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:05.531 "name": "Existed_Raid", 00:22:05.531 "uuid": "c299fc9e-c8bb-4204-81d5-2155abda2f9f", 00:22:05.531 "strip_size_kb": 64, 00:22:05.531 "state": "configuring", 00:22:05.531 "raid_level": "raid0", 00:22:05.531 "superblock": true, 00:22:05.531 "num_base_bdevs": 4, 00:22:05.531 "num_base_bdevs_discovered": 1, 00:22:05.531 "num_base_bdevs_operational": 4, 00:22:05.531 "base_bdevs_list": [ 00:22:05.531 { 00:22:05.531 "name": "BaseBdev1", 00:22:05.531 "uuid": "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9", 00:22:05.531 "is_configured": true, 00:22:05.531 "data_offset": 2048, 00:22:05.531 "data_size": 63488 00:22:05.531 }, 00:22:05.531 { 00:22:05.531 "name": "BaseBdev2", 00:22:05.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.531 "is_configured": false, 00:22:05.531 "data_offset": 0, 00:22:05.531 "data_size": 0 00:22:05.531 }, 00:22:05.531 { 00:22:05.531 "name": "BaseBdev3", 00:22:05.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.531 "is_configured": false, 00:22:05.531 "data_offset": 0, 00:22:05.531 "data_size": 0 00:22:05.531 }, 00:22:05.531 { 00:22:05.531 "name": "BaseBdev4", 00:22:05.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.531 "is_configured": false, 00:22:05.531 "data_offset": 0, 00:22:05.531 "data_size": 0 00:22:05.531 } 00:22:05.531 ] 00:22:05.531 }' 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.531 07:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.104 BaseBdev2 00:22:06.104 [2024-10-07 07:43:05.399919] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.104 [ 00:22:06.104 { 00:22:06.104 "name": "BaseBdev2", 00:22:06.104 "aliases": [ 00:22:06.104 "9c6d2b1a-38a3-4d4f-ac31-f73a512d62fa" 00:22:06.104 ], 00:22:06.104 "product_name": "Malloc disk", 00:22:06.104 "block_size": 512, 00:22:06.104 "num_blocks": 65536, 00:22:06.104 "uuid": 
"9c6d2b1a-38a3-4d4f-ac31-f73a512d62fa", 00:22:06.104 "assigned_rate_limits": { 00:22:06.104 "rw_ios_per_sec": 0, 00:22:06.104 "rw_mbytes_per_sec": 0, 00:22:06.104 "r_mbytes_per_sec": 0, 00:22:06.104 "w_mbytes_per_sec": 0 00:22:06.104 }, 00:22:06.104 "claimed": true, 00:22:06.104 "claim_type": "exclusive_write", 00:22:06.104 "zoned": false, 00:22:06.104 "supported_io_types": { 00:22:06.104 "read": true, 00:22:06.104 "write": true, 00:22:06.104 "unmap": true, 00:22:06.104 "flush": true, 00:22:06.104 "reset": true, 00:22:06.104 "nvme_admin": false, 00:22:06.104 "nvme_io": false, 00:22:06.104 "nvme_io_md": false, 00:22:06.104 "write_zeroes": true, 00:22:06.104 "zcopy": true, 00:22:06.104 "get_zone_info": false, 00:22:06.104 "zone_management": false, 00:22:06.104 "zone_append": false, 00:22:06.104 "compare": false, 00:22:06.104 "compare_and_write": false, 00:22:06.104 "abort": true, 00:22:06.104 "seek_hole": false, 00:22:06.104 "seek_data": false, 00:22:06.104 "copy": true, 00:22:06.104 "nvme_iov_md": false 00:22:06.104 }, 00:22:06.104 "memory_domains": [ 00:22:06.104 { 00:22:06.104 "dma_device_id": "system", 00:22:06.104 "dma_device_type": 1 00:22:06.104 }, 00:22:06.104 { 00:22:06.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.104 "dma_device_type": 2 00:22:06.104 } 00:22:06.104 ], 00:22:06.104 "driver_specific": {} 00:22:06.104 } 00:22:06.104 ] 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:06.104 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:06.105 07:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.105 "name": "Existed_Raid", 00:22:06.105 "uuid": "c299fc9e-c8bb-4204-81d5-2155abda2f9f", 00:22:06.105 "strip_size_kb": 64, 00:22:06.105 "state": "configuring", 00:22:06.105 "raid_level": "raid0", 00:22:06.105 "superblock": true, 00:22:06.105 "num_base_bdevs": 4, 00:22:06.105 
"num_base_bdevs_discovered": 2, 00:22:06.105 "num_base_bdevs_operational": 4, 00:22:06.105 "base_bdevs_list": [ 00:22:06.105 { 00:22:06.105 "name": "BaseBdev1", 00:22:06.105 "uuid": "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9", 00:22:06.105 "is_configured": true, 00:22:06.105 "data_offset": 2048, 00:22:06.105 "data_size": 63488 00:22:06.105 }, 00:22:06.105 { 00:22:06.105 "name": "BaseBdev2", 00:22:06.105 "uuid": "9c6d2b1a-38a3-4d4f-ac31-f73a512d62fa", 00:22:06.105 "is_configured": true, 00:22:06.105 "data_offset": 2048, 00:22:06.105 "data_size": 63488 00:22:06.105 }, 00:22:06.105 { 00:22:06.105 "name": "BaseBdev3", 00:22:06.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.105 "is_configured": false, 00:22:06.105 "data_offset": 0, 00:22:06.105 "data_size": 0 00:22:06.105 }, 00:22:06.105 { 00:22:06.105 "name": "BaseBdev4", 00:22:06.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.105 "is_configured": false, 00:22:06.105 "data_offset": 0, 00:22:06.105 "data_size": 0 00:22:06.105 } 00:22:06.105 ] 00:22:06.105 }' 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.105 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.364 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:06.364 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.364 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.622 [2024-10-07 07:43:05.958774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:06.623 BaseBdev3 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:06.623 07:43:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.623 [ 00:22:06.623 { 00:22:06.623 "name": "BaseBdev3", 00:22:06.623 "aliases": [ 00:22:06.623 "9540d3fb-3303-4ed8-b496-2173447b8608" 00:22:06.623 ], 00:22:06.623 "product_name": "Malloc disk", 00:22:06.623 "block_size": 512, 00:22:06.623 "num_blocks": 65536, 00:22:06.623 "uuid": "9540d3fb-3303-4ed8-b496-2173447b8608", 00:22:06.623 "assigned_rate_limits": { 00:22:06.623 "rw_ios_per_sec": 0, 00:22:06.623 "rw_mbytes_per_sec": 0, 00:22:06.623 "r_mbytes_per_sec": 0, 00:22:06.623 "w_mbytes_per_sec": 0 00:22:06.623 }, 00:22:06.623 "claimed": true, 00:22:06.623 "claim_type": "exclusive_write", 00:22:06.623 "zoned": false, 
00:22:06.623 "supported_io_types": { 00:22:06.623 "read": true, 00:22:06.623 "write": true, 00:22:06.623 "unmap": true, 00:22:06.623 "flush": true, 00:22:06.623 "reset": true, 00:22:06.623 "nvme_admin": false, 00:22:06.623 "nvme_io": false, 00:22:06.623 "nvme_io_md": false, 00:22:06.623 "write_zeroes": true, 00:22:06.623 "zcopy": true, 00:22:06.623 "get_zone_info": false, 00:22:06.623 "zone_management": false, 00:22:06.623 "zone_append": false, 00:22:06.623 "compare": false, 00:22:06.623 "compare_and_write": false, 00:22:06.623 "abort": true, 00:22:06.623 "seek_hole": false, 00:22:06.623 "seek_data": false, 00:22:06.623 "copy": true, 00:22:06.623 "nvme_iov_md": false 00:22:06.623 }, 00:22:06.623 "memory_domains": [ 00:22:06.623 { 00:22:06.623 "dma_device_id": "system", 00:22:06.623 "dma_device_type": 1 00:22:06.623 }, 00:22:06.623 { 00:22:06.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.623 "dma_device_type": 2 00:22:06.623 } 00:22:06.623 ], 00:22:06.623 "driver_specific": {} 00:22:06.623 } 00:22:06.623 ] 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:06.623 07:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:06.623 07:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.623 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:06.623 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.623 "name": "Existed_Raid", 00:22:06.623 "uuid": "c299fc9e-c8bb-4204-81d5-2155abda2f9f", 00:22:06.623 "strip_size_kb": 64, 00:22:06.623 "state": "configuring", 00:22:06.623 "raid_level": "raid0", 00:22:06.623 "superblock": true, 00:22:06.623 "num_base_bdevs": 4, 00:22:06.623 "num_base_bdevs_discovered": 3, 00:22:06.623 "num_base_bdevs_operational": 4, 00:22:06.623 "base_bdevs_list": [ 00:22:06.623 { 00:22:06.623 "name": "BaseBdev1", 00:22:06.623 "uuid": "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9", 00:22:06.623 "is_configured": true, 00:22:06.623 "data_offset": 2048, 00:22:06.623 "data_size": 63488 00:22:06.623 }, 00:22:06.623 { 
00:22:06.623 "name": "BaseBdev2", 00:22:06.623 "uuid": "9c6d2b1a-38a3-4d4f-ac31-f73a512d62fa", 00:22:06.623 "is_configured": true, 00:22:06.623 "data_offset": 2048, 00:22:06.623 "data_size": 63488 00:22:06.623 }, 00:22:06.623 { 00:22:06.623 "name": "BaseBdev3", 00:22:06.623 "uuid": "9540d3fb-3303-4ed8-b496-2173447b8608", 00:22:06.623 "is_configured": true, 00:22:06.623 "data_offset": 2048, 00:22:06.623 "data_size": 63488 00:22:06.623 }, 00:22:06.623 { 00:22:06.623 "name": "BaseBdev4", 00:22:06.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.623 "is_configured": false, 00:22:06.623 "data_offset": 0, 00:22:06.623 "data_size": 0 00:22:06.623 } 00:22:06.623 ] 00:22:06.623 }' 00:22:06.623 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.623 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.192 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:07.192 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.192 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.192 [2024-10-07 07:43:06.491742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:07.192 BaseBdev4 00:22:07.192 [2024-10-07 07:43:06.492048] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:07.192 [2024-10-07 07:43:06.492067] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:07.192 [2024-10-07 07:43:06.492381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:07.192 [2024-10-07 07:43:06.492559] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:07.193 [2024-10-07 07:43:06.492576] bdev_raid.c:1761:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:07.193 [2024-10-07 07:43:06.492798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.193 [ 00:22:07.193 { 00:22:07.193 "name": "BaseBdev4", 00:22:07.193 "aliases": [ 00:22:07.193 "fcffbdb6-7745-4770-be71-88713d0d506d" 00:22:07.193 ], 00:22:07.193 "product_name": "Malloc 
disk", 00:22:07.193 "block_size": 512, 00:22:07.193 "num_blocks": 65536, 00:22:07.193 "uuid": "fcffbdb6-7745-4770-be71-88713d0d506d", 00:22:07.193 "assigned_rate_limits": { 00:22:07.193 "rw_ios_per_sec": 0, 00:22:07.193 "rw_mbytes_per_sec": 0, 00:22:07.193 "r_mbytes_per_sec": 0, 00:22:07.193 "w_mbytes_per_sec": 0 00:22:07.193 }, 00:22:07.193 "claimed": true, 00:22:07.193 "claim_type": "exclusive_write", 00:22:07.193 "zoned": false, 00:22:07.193 "supported_io_types": { 00:22:07.193 "read": true, 00:22:07.193 "write": true, 00:22:07.193 "unmap": true, 00:22:07.193 "flush": true, 00:22:07.193 "reset": true, 00:22:07.193 "nvme_admin": false, 00:22:07.193 "nvme_io": false, 00:22:07.193 "nvme_io_md": false, 00:22:07.193 "write_zeroes": true, 00:22:07.193 "zcopy": true, 00:22:07.193 "get_zone_info": false, 00:22:07.193 "zone_management": false, 00:22:07.193 "zone_append": false, 00:22:07.193 "compare": false, 00:22:07.193 "compare_and_write": false, 00:22:07.193 "abort": true, 00:22:07.193 "seek_hole": false, 00:22:07.193 "seek_data": false, 00:22:07.193 "copy": true, 00:22:07.193 "nvme_iov_md": false 00:22:07.193 }, 00:22:07.193 "memory_domains": [ 00:22:07.193 { 00:22:07.193 "dma_device_id": "system", 00:22:07.193 "dma_device_type": 1 00:22:07.193 }, 00:22:07.193 { 00:22:07.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.193 "dma_device_type": 2 00:22:07.193 } 00:22:07.193 ], 00:22:07.193 "driver_specific": {} 00:22:07.193 } 00:22:07.193 ] 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.193 "name": "Existed_Raid", 00:22:07.193 "uuid": "c299fc9e-c8bb-4204-81d5-2155abda2f9f", 00:22:07.193 "strip_size_kb": 64, 00:22:07.193 "state": "online", 00:22:07.193 "raid_level": "raid0", 00:22:07.193 
"superblock": true, 00:22:07.193 "num_base_bdevs": 4, 00:22:07.193 "num_base_bdevs_discovered": 4, 00:22:07.193 "num_base_bdevs_operational": 4, 00:22:07.193 "base_bdevs_list": [ 00:22:07.193 { 00:22:07.193 "name": "BaseBdev1", 00:22:07.193 "uuid": "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9", 00:22:07.193 "is_configured": true, 00:22:07.193 "data_offset": 2048, 00:22:07.193 "data_size": 63488 00:22:07.193 }, 00:22:07.193 { 00:22:07.193 "name": "BaseBdev2", 00:22:07.193 "uuid": "9c6d2b1a-38a3-4d4f-ac31-f73a512d62fa", 00:22:07.193 "is_configured": true, 00:22:07.193 "data_offset": 2048, 00:22:07.193 "data_size": 63488 00:22:07.193 }, 00:22:07.193 { 00:22:07.193 "name": "BaseBdev3", 00:22:07.193 "uuid": "9540d3fb-3303-4ed8-b496-2173447b8608", 00:22:07.193 "is_configured": true, 00:22:07.193 "data_offset": 2048, 00:22:07.193 "data_size": 63488 00:22:07.193 }, 00:22:07.193 { 00:22:07.193 "name": "BaseBdev4", 00:22:07.193 "uuid": "fcffbdb6-7745-4770-be71-88713d0d506d", 00:22:07.193 "is_configured": true, 00:22:07.193 "data_offset": 2048, 00:22:07.193 "data_size": 63488 00:22:07.193 } 00:22:07.193 ] 00:22:07.193 }' 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.193 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.452 [2024-10-07 07:43:06.968280] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:07.452 07:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.711 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:07.711 "name": "Existed_Raid", 00:22:07.711 "aliases": [ 00:22:07.711 "c299fc9e-c8bb-4204-81d5-2155abda2f9f" 00:22:07.711 ], 00:22:07.711 "product_name": "Raid Volume", 00:22:07.711 "block_size": 512, 00:22:07.711 "num_blocks": 253952, 00:22:07.711 "uuid": "c299fc9e-c8bb-4204-81d5-2155abda2f9f", 00:22:07.711 "assigned_rate_limits": { 00:22:07.711 "rw_ios_per_sec": 0, 00:22:07.711 "rw_mbytes_per_sec": 0, 00:22:07.711 "r_mbytes_per_sec": 0, 00:22:07.711 "w_mbytes_per_sec": 0 00:22:07.711 }, 00:22:07.711 "claimed": false, 00:22:07.711 "zoned": false, 00:22:07.711 "supported_io_types": { 00:22:07.711 "read": true, 00:22:07.711 "write": true, 00:22:07.711 "unmap": true, 00:22:07.711 "flush": true, 00:22:07.711 "reset": true, 00:22:07.711 "nvme_admin": false, 00:22:07.711 "nvme_io": false, 00:22:07.711 "nvme_io_md": false, 00:22:07.711 "write_zeroes": true, 00:22:07.711 "zcopy": false, 00:22:07.711 "get_zone_info": false, 00:22:07.711 "zone_management": false, 00:22:07.711 "zone_append": false, 00:22:07.711 "compare": false, 00:22:07.711 "compare_and_write": false, 00:22:07.711 "abort": false, 00:22:07.711 "seek_hole": false, 00:22:07.711 "seek_data": false, 
00:22:07.711 "copy": false, 00:22:07.711 "nvme_iov_md": false 00:22:07.711 }, 00:22:07.711 "memory_domains": [ 00:22:07.711 { 00:22:07.711 "dma_device_id": "system", 00:22:07.711 "dma_device_type": 1 00:22:07.711 }, 00:22:07.711 { 00:22:07.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.711 "dma_device_type": 2 00:22:07.711 }, 00:22:07.711 { 00:22:07.711 "dma_device_id": "system", 00:22:07.711 "dma_device_type": 1 00:22:07.711 }, 00:22:07.711 { 00:22:07.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.711 "dma_device_type": 2 00:22:07.711 }, 00:22:07.711 { 00:22:07.711 "dma_device_id": "system", 00:22:07.711 "dma_device_type": 1 00:22:07.711 }, 00:22:07.711 { 00:22:07.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.712 "dma_device_type": 2 00:22:07.712 }, 00:22:07.712 { 00:22:07.712 "dma_device_id": "system", 00:22:07.712 "dma_device_type": 1 00:22:07.712 }, 00:22:07.712 { 00:22:07.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.712 "dma_device_type": 2 00:22:07.712 } 00:22:07.712 ], 00:22:07.712 "driver_specific": { 00:22:07.712 "raid": { 00:22:07.712 "uuid": "c299fc9e-c8bb-4204-81d5-2155abda2f9f", 00:22:07.712 "strip_size_kb": 64, 00:22:07.712 "state": "online", 00:22:07.712 "raid_level": "raid0", 00:22:07.712 "superblock": true, 00:22:07.712 "num_base_bdevs": 4, 00:22:07.712 "num_base_bdevs_discovered": 4, 00:22:07.712 "num_base_bdevs_operational": 4, 00:22:07.712 "base_bdevs_list": [ 00:22:07.712 { 00:22:07.712 "name": "BaseBdev1", 00:22:07.712 "uuid": "d26dec86-5c04-4f73-8d3d-fbdbe14a3ba9", 00:22:07.712 "is_configured": true, 00:22:07.712 "data_offset": 2048, 00:22:07.712 "data_size": 63488 00:22:07.712 }, 00:22:07.712 { 00:22:07.712 "name": "BaseBdev2", 00:22:07.712 "uuid": "9c6d2b1a-38a3-4d4f-ac31-f73a512d62fa", 00:22:07.712 "is_configured": true, 00:22:07.712 "data_offset": 2048, 00:22:07.712 "data_size": 63488 00:22:07.712 }, 00:22:07.712 { 00:22:07.712 "name": "BaseBdev3", 00:22:07.712 "uuid": 
"9540d3fb-3303-4ed8-b496-2173447b8608", 00:22:07.712 "is_configured": true, 00:22:07.712 "data_offset": 2048, 00:22:07.712 "data_size": 63488 00:22:07.712 }, 00:22:07.712 { 00:22:07.712 "name": "BaseBdev4", 00:22:07.712 "uuid": "fcffbdb6-7745-4770-be71-88713d0d506d", 00:22:07.712 "is_configured": true, 00:22:07.712 "data_offset": 2048, 00:22:07.712 "data_size": 63488 00:22:07.712 } 00:22:07.712 ] 00:22:07.712 } 00:22:07.712 } 00:22:07.712 }' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:07.712 BaseBdev2 00:22:07.712 BaseBdev3 00:22:07.712 BaseBdev4' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:07.712 
07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.712 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.971 [2024-10-07 07:43:07.304072] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:07.971 [2024-10-07 07:43:07.304253] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.971 [2024-10-07 07:43:07.304416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:22:07.971 
07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.971 07:43:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.971 "name": "Existed_Raid", 00:22:07.971 "uuid": "c299fc9e-c8bb-4204-81d5-2155abda2f9f", 00:22:07.971 "strip_size_kb": 64, 00:22:07.971 "state": "offline", 00:22:07.971 "raid_level": "raid0", 00:22:07.971 "superblock": true, 00:22:07.971 "num_base_bdevs": 4, 00:22:07.971 "num_base_bdevs_discovered": 3, 00:22:07.971 "num_base_bdevs_operational": 3, 00:22:07.971 "base_bdevs_list": [ 00:22:07.971 { 00:22:07.971 "name": null, 00:22:07.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.971 "is_configured": false, 00:22:07.971 "data_offset": 0, 00:22:07.971 "data_size": 63488 00:22:07.971 }, 00:22:07.971 { 00:22:07.971 "name": "BaseBdev2", 00:22:07.971 "uuid": "9c6d2b1a-38a3-4d4f-ac31-f73a512d62fa", 00:22:07.971 "is_configured": true, 00:22:07.971 "data_offset": 2048, 00:22:07.971 "data_size": 63488 00:22:07.971 }, 00:22:07.971 { 00:22:07.971 "name": "BaseBdev3", 00:22:07.971 "uuid": "9540d3fb-3303-4ed8-b496-2173447b8608", 00:22:07.971 "is_configured": true, 00:22:07.971 "data_offset": 2048, 00:22:07.971 "data_size": 63488 00:22:07.971 }, 00:22:07.971 { 00:22:07.971 "name": "BaseBdev4", 00:22:07.971 "uuid": "fcffbdb6-7745-4770-be71-88713d0d506d", 00:22:07.971 "is_configured": true, 00:22:07.971 "data_offset": 2048, 00:22:07.971 "data_size": 63488 00:22:07.971 } 00:22:07.971 ] 00:22:07.971 }' 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.971 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.538 07:43:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.538 [2024-10-07 07:43:07.850654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.538 07:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.538 [2024-10-07 07:43:07.988491] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.538 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.798 07:43:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.798 [2024-10-07 07:43:08.147541] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:08.798 [2024-10-07 07:43:08.147786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.798 BaseBdev2 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:08.798 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:22:08.798 [ 00:22:08.798 { 00:22:08.798 "name": "BaseBdev2", 00:22:08.798 "aliases": [ 00:22:08.798 "4ec2b5de-bccd-4056-b0ff-ac39922e881a" 00:22:08.798 ], 00:22:08.798 "product_name": "Malloc disk", 00:22:08.798 "block_size": 512, 00:22:08.798 "num_blocks": 65536, 00:22:08.798 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:08.798 "assigned_rate_limits": { 00:22:08.798 "rw_ios_per_sec": 0, 00:22:08.798 "rw_mbytes_per_sec": 0, 00:22:08.798 "r_mbytes_per_sec": 0, 00:22:08.798 "w_mbytes_per_sec": 0 00:22:08.798 }, 00:22:08.798 "claimed": false, 00:22:08.798 "zoned": false, 00:22:08.798 "supported_io_types": { 00:22:08.798 "read": true, 00:22:08.798 "write": true, 00:22:08.798 "unmap": true, 00:22:08.798 "flush": true, 00:22:08.798 "reset": true, 00:22:08.798 "nvme_admin": false, 00:22:08.798 "nvme_io": false, 00:22:09.057 "nvme_io_md": false, 00:22:09.057 "write_zeroes": true, 00:22:09.057 "zcopy": true, 00:22:09.057 "get_zone_info": false, 00:22:09.057 "zone_management": false, 00:22:09.057 "zone_append": false, 00:22:09.057 "compare": false, 00:22:09.057 "compare_and_write": false, 00:22:09.057 "abort": true, 00:22:09.057 "seek_hole": false, 00:22:09.057 "seek_data": false, 00:22:09.057 "copy": true, 00:22:09.057 "nvme_iov_md": false 00:22:09.057 }, 00:22:09.057 "memory_domains": [ 00:22:09.057 { 00:22:09.057 "dma_device_id": "system", 00:22:09.057 "dma_device_type": 1 00:22:09.057 }, 00:22:09.057 { 00:22:09.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.057 "dma_device_type": 2 00:22:09.057 } 00:22:09.057 ], 00:22:09.057 "driver_specific": {} 00:22:09.057 } 00:22:09.057 ] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:09.057 07:43:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.057 BaseBdev3 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.057 07:43:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.057 [ 00:22:09.057 { 00:22:09.057 "name": "BaseBdev3", 00:22:09.057 "aliases": [ 00:22:09.057 "f5dde530-cc61-4cc2-8d0e-a0488dce34aa" 00:22:09.057 ], 00:22:09.057 "product_name": "Malloc disk", 00:22:09.057 "block_size": 512, 00:22:09.057 "num_blocks": 65536, 00:22:09.057 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:09.057 "assigned_rate_limits": { 00:22:09.057 "rw_ios_per_sec": 0, 00:22:09.057 "rw_mbytes_per_sec": 0, 00:22:09.057 "r_mbytes_per_sec": 0, 00:22:09.057 "w_mbytes_per_sec": 0 00:22:09.057 }, 00:22:09.057 "claimed": false, 00:22:09.057 "zoned": false, 00:22:09.057 "supported_io_types": { 00:22:09.057 "read": true, 00:22:09.057 "write": true, 00:22:09.057 "unmap": true, 00:22:09.057 "flush": true, 00:22:09.057 "reset": true, 00:22:09.057 "nvme_admin": false, 00:22:09.057 "nvme_io": false, 00:22:09.057 "nvme_io_md": false, 00:22:09.057 "write_zeroes": true, 00:22:09.057 "zcopy": true, 00:22:09.057 "get_zone_info": false, 00:22:09.057 "zone_management": false, 00:22:09.057 "zone_append": false, 00:22:09.057 "compare": false, 00:22:09.057 "compare_and_write": false, 00:22:09.057 "abort": true, 00:22:09.057 "seek_hole": false, 00:22:09.057 "seek_data": false, 00:22:09.057 "copy": true, 00:22:09.057 "nvme_iov_md": false 00:22:09.057 }, 00:22:09.057 "memory_domains": [ 00:22:09.057 { 00:22:09.057 "dma_device_id": "system", 00:22:09.057 "dma_device_type": 1 00:22:09.057 }, 00:22:09.057 { 00:22:09.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.057 "dma_device_type": 2 00:22:09.057 } 00:22:09.057 ], 00:22:09.057 "driver_specific": {} 00:22:09.057 } 00:22:09.057 ] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.057 BaseBdev4 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:09.057 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.058 [ 00:22:09.058 { 00:22:09.058 "name": "BaseBdev4", 00:22:09.058 "aliases": [ 00:22:09.058 "c63891a4-2b4d-4f6a-a943-5c8378867ecd" 00:22:09.058 ], 00:22:09.058 "product_name": "Malloc disk", 00:22:09.058 "block_size": 512, 00:22:09.058 "num_blocks": 65536, 00:22:09.058 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:09.058 "assigned_rate_limits": { 00:22:09.058 "rw_ios_per_sec": 0, 00:22:09.058 "rw_mbytes_per_sec": 0, 00:22:09.058 "r_mbytes_per_sec": 0, 00:22:09.058 "w_mbytes_per_sec": 0 00:22:09.058 }, 00:22:09.058 "claimed": false, 00:22:09.058 "zoned": false, 00:22:09.058 "supported_io_types": { 00:22:09.058 "read": true, 00:22:09.058 "write": true, 00:22:09.058 "unmap": true, 00:22:09.058 "flush": true, 00:22:09.058 "reset": true, 00:22:09.058 "nvme_admin": false, 00:22:09.058 "nvme_io": false, 00:22:09.058 "nvme_io_md": false, 00:22:09.058 "write_zeroes": true, 00:22:09.058 "zcopy": true, 00:22:09.058 "get_zone_info": false, 00:22:09.058 "zone_management": false, 00:22:09.058 "zone_append": false, 00:22:09.058 "compare": false, 00:22:09.058 "compare_and_write": false, 00:22:09.058 "abort": true, 00:22:09.058 "seek_hole": false, 00:22:09.058 "seek_data": false, 00:22:09.058 "copy": true, 00:22:09.058 "nvme_iov_md": false 00:22:09.058 }, 00:22:09.058 "memory_domains": [ 00:22:09.058 { 00:22:09.058 "dma_device_id": "system", 00:22:09.058 "dma_device_type": 1 00:22:09.058 }, 00:22:09.058 { 00:22:09.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.058 "dma_device_type": 2 00:22:09.058 } 00:22:09.058 ], 00:22:09.058 "driver_specific": {} 00:22:09.058 } 00:22:09.058 ] 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 
00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.058 [2024-10-07 07:43:08.517634] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.058 [2024-10-07 07:43:08.517828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.058 [2024-10-07 07:43:08.517878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.058 [2024-10-07 07:43:08.520046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.058 [2024-10-07 07:43:08.520110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.058 "name": "Existed_Raid", 00:22:09.058 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:09.058 "strip_size_kb": 64, 00:22:09.058 "state": "configuring", 00:22:09.058 "raid_level": "raid0", 00:22:09.058 "superblock": true, 00:22:09.058 "num_base_bdevs": 4, 00:22:09.058 "num_base_bdevs_discovered": 3, 00:22:09.058 "num_base_bdevs_operational": 4, 00:22:09.058 "base_bdevs_list": [ 00:22:09.058 { 00:22:09.058 "name": "BaseBdev1", 00:22:09.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.058 "is_configured": false, 00:22:09.058 "data_offset": 0, 00:22:09.058 "data_size": 0 00:22:09.058 }, 00:22:09.058 { 00:22:09.058 "name": "BaseBdev2", 00:22:09.058 "uuid": 
"4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:09.058 "is_configured": true, 00:22:09.058 "data_offset": 2048, 00:22:09.058 "data_size": 63488 00:22:09.058 }, 00:22:09.058 { 00:22:09.058 "name": "BaseBdev3", 00:22:09.058 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:09.058 "is_configured": true, 00:22:09.058 "data_offset": 2048, 00:22:09.058 "data_size": 63488 00:22:09.058 }, 00:22:09.058 { 00:22:09.058 "name": "BaseBdev4", 00:22:09.058 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:09.058 "is_configured": true, 00:22:09.058 "data_offset": 2048, 00:22:09.058 "data_size": 63488 00:22:09.058 } 00:22:09.058 ] 00:22:09.058 }' 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.058 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.626 [2024-10-07 07:43:08.973744] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.626 07:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:09.626 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.626 "name": "Existed_Raid", 00:22:09.626 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:09.626 "strip_size_kb": 64, 00:22:09.626 "state": "configuring", 00:22:09.626 "raid_level": "raid0", 00:22:09.626 "superblock": true, 00:22:09.626 "num_base_bdevs": 4, 00:22:09.626 "num_base_bdevs_discovered": 2, 00:22:09.626 "num_base_bdevs_operational": 4, 00:22:09.626 "base_bdevs_list": [ 00:22:09.626 { 00:22:09.626 "name": "BaseBdev1", 00:22:09.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.626 "is_configured": false, 00:22:09.626 "data_offset": 0, 00:22:09.626 "data_size": 0 00:22:09.627 }, 00:22:09.627 { 00:22:09.627 "name": null, 00:22:09.627 "uuid": 
"4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:09.627 "is_configured": false, 00:22:09.627 "data_offset": 0, 00:22:09.627 "data_size": 63488 00:22:09.627 }, 00:22:09.627 { 00:22:09.627 "name": "BaseBdev3", 00:22:09.627 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:09.627 "is_configured": true, 00:22:09.627 "data_offset": 2048, 00:22:09.627 "data_size": 63488 00:22:09.627 }, 00:22:09.627 { 00:22:09.627 "name": "BaseBdev4", 00:22:09.627 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:09.627 "is_configured": true, 00:22:09.627 "data_offset": 2048, 00:22:09.627 "data_size": 63488 00:22:09.627 } 00:22:09.627 ] 00:22:09.627 }' 00:22:09.627 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.627 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.885 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.885 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:09.885 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:09.885 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.144 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.144 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:10.144 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:10.144 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:10.144 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.145 [2024-10-07 07:43:09.524609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:22:10.145 BaseBdev1 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.145 [ 00:22:10.145 { 00:22:10.145 "name": "BaseBdev1", 00:22:10.145 "aliases": [ 00:22:10.145 "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb" 00:22:10.145 ], 00:22:10.145 "product_name": "Malloc disk", 00:22:10.145 "block_size": 512, 00:22:10.145 "num_blocks": 65536, 00:22:10.145 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 
00:22:10.145 "assigned_rate_limits": { 00:22:10.145 "rw_ios_per_sec": 0, 00:22:10.145 "rw_mbytes_per_sec": 0, 00:22:10.145 "r_mbytes_per_sec": 0, 00:22:10.145 "w_mbytes_per_sec": 0 00:22:10.145 }, 00:22:10.145 "claimed": true, 00:22:10.145 "claim_type": "exclusive_write", 00:22:10.145 "zoned": false, 00:22:10.145 "supported_io_types": { 00:22:10.145 "read": true, 00:22:10.145 "write": true, 00:22:10.145 "unmap": true, 00:22:10.145 "flush": true, 00:22:10.145 "reset": true, 00:22:10.145 "nvme_admin": false, 00:22:10.145 "nvme_io": false, 00:22:10.145 "nvme_io_md": false, 00:22:10.145 "write_zeroes": true, 00:22:10.145 "zcopy": true, 00:22:10.145 "get_zone_info": false, 00:22:10.145 "zone_management": false, 00:22:10.145 "zone_append": false, 00:22:10.145 "compare": false, 00:22:10.145 "compare_and_write": false, 00:22:10.145 "abort": true, 00:22:10.145 "seek_hole": false, 00:22:10.145 "seek_data": false, 00:22:10.145 "copy": true, 00:22:10.145 "nvme_iov_md": false 00:22:10.145 }, 00:22:10.145 "memory_domains": [ 00:22:10.145 { 00:22:10.145 "dma_device_id": "system", 00:22:10.145 "dma_device_type": 1 00:22:10.145 }, 00:22:10.145 { 00:22:10.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.145 "dma_device_type": 2 00:22:10.145 } 00:22:10.145 ], 00:22:10.145 "driver_specific": {} 00:22:10.145 } 00:22:10.145 ] 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.145 07:43:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.145 "name": "Existed_Raid", 00:22:10.145 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:10.145 "strip_size_kb": 64, 00:22:10.145 "state": "configuring", 00:22:10.145 "raid_level": "raid0", 00:22:10.145 "superblock": true, 00:22:10.145 "num_base_bdevs": 4, 00:22:10.145 "num_base_bdevs_discovered": 3, 00:22:10.145 "num_base_bdevs_operational": 4, 00:22:10.145 "base_bdevs_list": [ 00:22:10.145 { 00:22:10.145 "name": "BaseBdev1", 00:22:10.145 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:10.145 
"is_configured": true, 00:22:10.145 "data_offset": 2048, 00:22:10.145 "data_size": 63488 00:22:10.145 }, 00:22:10.145 { 00:22:10.145 "name": null, 00:22:10.145 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:10.145 "is_configured": false, 00:22:10.145 "data_offset": 0, 00:22:10.145 "data_size": 63488 00:22:10.145 }, 00:22:10.145 { 00:22:10.145 "name": "BaseBdev3", 00:22:10.145 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:10.145 "is_configured": true, 00:22:10.145 "data_offset": 2048, 00:22:10.145 "data_size": 63488 00:22:10.145 }, 00:22:10.145 { 00:22:10.145 "name": "BaseBdev4", 00:22:10.145 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:10.145 "is_configured": true, 00:22:10.145 "data_offset": 2048, 00:22:10.145 "data_size": 63488 00:22:10.145 } 00:22:10.145 ] 00:22:10.145 }' 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.145 07:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:10.713 07:43:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.713 [2024-10-07 07:43:10.096909] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.713 07:43:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.713 "name": "Existed_Raid", 00:22:10.713 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:10.713 "strip_size_kb": 64, 00:22:10.713 "state": "configuring", 00:22:10.713 "raid_level": "raid0", 00:22:10.713 "superblock": true, 00:22:10.713 "num_base_bdevs": 4, 00:22:10.713 "num_base_bdevs_discovered": 2, 00:22:10.713 "num_base_bdevs_operational": 4, 00:22:10.713 "base_bdevs_list": [ 00:22:10.713 { 00:22:10.713 "name": "BaseBdev1", 00:22:10.713 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:10.713 "is_configured": true, 00:22:10.713 "data_offset": 2048, 00:22:10.713 "data_size": 63488 00:22:10.713 }, 00:22:10.713 { 00:22:10.713 "name": null, 00:22:10.713 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:10.713 "is_configured": false, 00:22:10.713 "data_offset": 0, 00:22:10.713 "data_size": 63488 00:22:10.713 }, 00:22:10.713 { 00:22:10.713 "name": null, 00:22:10.713 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:10.713 "is_configured": false, 00:22:10.713 "data_offset": 0, 00:22:10.713 "data_size": 63488 00:22:10.713 }, 00:22:10.713 { 00:22:10.713 "name": "BaseBdev4", 00:22:10.713 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:10.713 "is_configured": true, 00:22:10.713 "data_offset": 2048, 00:22:10.713 "data_size": 63488 00:22:10.713 } 00:22:10.713 ] 00:22:10.713 }' 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.713 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.279 
07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.279 [2024-10-07 07:43:10.613035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.279 "name": "Existed_Raid", 00:22:11.279 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:11.279 "strip_size_kb": 64, 00:22:11.279 "state": "configuring", 00:22:11.279 "raid_level": "raid0", 00:22:11.279 "superblock": true, 00:22:11.279 "num_base_bdevs": 4, 00:22:11.279 "num_base_bdevs_discovered": 3, 00:22:11.279 "num_base_bdevs_operational": 4, 00:22:11.279 "base_bdevs_list": [ 00:22:11.279 { 00:22:11.279 "name": "BaseBdev1", 00:22:11.279 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:11.279 "is_configured": true, 00:22:11.279 "data_offset": 2048, 00:22:11.279 "data_size": 63488 00:22:11.279 }, 00:22:11.279 { 00:22:11.279 "name": null, 00:22:11.279 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:11.279 "is_configured": false, 00:22:11.279 "data_offset": 0, 00:22:11.279 "data_size": 63488 00:22:11.279 }, 00:22:11.279 { 00:22:11.279 "name": "BaseBdev3", 00:22:11.279 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:11.279 "is_configured": true, 00:22:11.279 "data_offset": 2048, 00:22:11.279 "data_size": 63488 00:22:11.279 }, 
00:22:11.279 { 00:22:11.279 "name": "BaseBdev4", 00:22:11.279 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:11.279 "is_configured": true, 00:22:11.279 "data_offset": 2048, 00:22:11.279 "data_size": 63488 00:22:11.279 } 00:22:11.279 ] 00:22:11.279 }' 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.279 07:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.537 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.537 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:11.537 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:11.537 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.537 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.794 [2024-10-07 07:43:11.117345] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:11.794 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.794 "name": "Existed_Raid", 00:22:11.794 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:11.794 "strip_size_kb": 64, 00:22:11.794 "state": "configuring", 00:22:11.794 "raid_level": "raid0", 00:22:11.794 "superblock": true, 00:22:11.794 "num_base_bdevs": 4, 00:22:11.794 "num_base_bdevs_discovered": 2, 00:22:11.794 "num_base_bdevs_operational": 4, 00:22:11.794 
"base_bdevs_list": [ 00:22:11.794 { 00:22:11.794 "name": null, 00:22:11.794 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:11.794 "is_configured": false, 00:22:11.794 "data_offset": 0, 00:22:11.794 "data_size": 63488 00:22:11.794 }, 00:22:11.794 { 00:22:11.794 "name": null, 00:22:11.794 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:11.794 "is_configured": false, 00:22:11.794 "data_offset": 0, 00:22:11.794 "data_size": 63488 00:22:11.794 }, 00:22:11.794 { 00:22:11.794 "name": "BaseBdev3", 00:22:11.795 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:11.795 "is_configured": true, 00:22:11.795 "data_offset": 2048, 00:22:11.795 "data_size": 63488 00:22:11.795 }, 00:22:11.795 { 00:22:11.795 "name": "BaseBdev4", 00:22:11.795 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:11.795 "is_configured": true, 00:22:11.795 "data_offset": 2048, 00:22:11.795 "data_size": 63488 00:22:11.795 } 00:22:11.795 ] 00:22:11.795 }' 00:22:11.795 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.795 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.361 [2024-10-07 07:43:11.738771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.361 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.361 "name": "Existed_Raid", 00:22:12.361 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:12.361 "strip_size_kb": 64, 00:22:12.361 "state": "configuring", 00:22:12.361 "raid_level": "raid0", 00:22:12.361 "superblock": true, 00:22:12.361 "num_base_bdevs": 4, 00:22:12.361 "num_base_bdevs_discovered": 3, 00:22:12.361 "num_base_bdevs_operational": 4, 00:22:12.361 "base_bdevs_list": [ 00:22:12.361 { 00:22:12.361 "name": null, 00:22:12.361 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:12.361 "is_configured": false, 00:22:12.361 "data_offset": 0, 00:22:12.361 "data_size": 63488 00:22:12.361 }, 00:22:12.361 { 00:22:12.361 "name": "BaseBdev2", 00:22:12.361 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:12.361 "is_configured": true, 00:22:12.361 "data_offset": 2048, 00:22:12.361 "data_size": 63488 00:22:12.361 }, 00:22:12.361 { 00:22:12.361 "name": "BaseBdev3", 00:22:12.361 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:12.361 "is_configured": true, 00:22:12.362 "data_offset": 2048, 00:22:12.362 "data_size": 63488 00:22:12.362 }, 00:22:12.362 { 00:22:12.362 "name": "BaseBdev4", 00:22:12.362 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:12.362 "is_configured": true, 00:22:12.362 "data_offset": 2048, 00:22:12.362 "data_size": 63488 00:22:12.362 } 00:22:12.362 ] 00:22:12.362 }' 00:22:12.362 07:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.362 07:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.928 NewBaseBdev 00:22:12.928 [2024-10-07 07:43:12.342091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:12.928 [2024-10-07 07:43:12.342462] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:12.928 [2024-10-07 07:43:12.342484] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:12.928 [2024-10-07 07:43:12.342907] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:12.928 [2024-10-07 07:43:12.343101] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:12.928 [2024-10-07 07:43:12.343122] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:12.928 [2024-10-07 07:43:12.343315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 
00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.928 [ 00:22:12.928 { 00:22:12.928 "name": "NewBaseBdev", 00:22:12.928 "aliases": [ 00:22:12.928 "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb" 00:22:12.928 ], 00:22:12.928 "product_name": "Malloc disk", 00:22:12.928 "block_size": 512, 00:22:12.928 "num_blocks": 65536, 00:22:12.928 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:12.928 "assigned_rate_limits": { 00:22:12.928 "rw_ios_per_sec": 0, 00:22:12.928 "rw_mbytes_per_sec": 0, 00:22:12.928 "r_mbytes_per_sec": 0, 00:22:12.928 "w_mbytes_per_sec": 0 00:22:12.928 }, 00:22:12.928 "claimed": true, 00:22:12.928 "claim_type": "exclusive_write", 00:22:12.928 "zoned": false, 00:22:12.928 "supported_io_types": { 00:22:12.928 "read": true, 00:22:12.928 "write": true, 00:22:12.928 "unmap": true, 00:22:12.928 "flush": true, 00:22:12.928 "reset": true, 00:22:12.928 "nvme_admin": false, 00:22:12.928 "nvme_io": false, 00:22:12.928 "nvme_io_md": false, 00:22:12.928 "write_zeroes": true, 00:22:12.928 "zcopy": true, 00:22:12.928 "get_zone_info": false, 00:22:12.928 "zone_management": false, 00:22:12.928 "zone_append": false, 00:22:12.928 "compare": false, 00:22:12.928 "compare_and_write": false, 00:22:12.928 "abort": true, 00:22:12.928 "seek_hole": false, 00:22:12.928 "seek_data": false, 00:22:12.928 "copy": true, 00:22:12.928 "nvme_iov_md": false 00:22:12.928 }, 00:22:12.928 "memory_domains": [ 00:22:12.928 { 00:22:12.928 "dma_device_id": "system", 00:22:12.928 "dma_device_type": 1 00:22:12.928 }, 00:22:12.928 { 00:22:12.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.928 "dma_device_type": 2 00:22:12.928 } 00:22:12.928 ], 00:22:12.928 "driver_specific": {} 00:22:12.928 } 00:22:12.928 ] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:12.928 
07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.928 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.929 "name": "Existed_Raid", 00:22:12.929 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:12.929 "strip_size_kb": 64, 
00:22:12.929 "state": "online", 00:22:12.929 "raid_level": "raid0", 00:22:12.929 "superblock": true, 00:22:12.929 "num_base_bdevs": 4, 00:22:12.929 "num_base_bdevs_discovered": 4, 00:22:12.929 "num_base_bdevs_operational": 4, 00:22:12.929 "base_bdevs_list": [ 00:22:12.929 { 00:22:12.929 "name": "NewBaseBdev", 00:22:12.929 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:12.929 "is_configured": true, 00:22:12.929 "data_offset": 2048, 00:22:12.929 "data_size": 63488 00:22:12.929 }, 00:22:12.929 { 00:22:12.929 "name": "BaseBdev2", 00:22:12.929 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:12.929 "is_configured": true, 00:22:12.929 "data_offset": 2048, 00:22:12.929 "data_size": 63488 00:22:12.929 }, 00:22:12.929 { 00:22:12.929 "name": "BaseBdev3", 00:22:12.929 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:12.929 "is_configured": true, 00:22:12.929 "data_offset": 2048, 00:22:12.929 "data_size": 63488 00:22:12.929 }, 00:22:12.929 { 00:22:12.929 "name": "BaseBdev4", 00:22:12.929 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:12.929 "is_configured": true, 00:22:12.929 "data_offset": 2048, 00:22:12.929 "data_size": 63488 00:22:12.929 } 00:22:12.929 ] 00:22:12.929 }' 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.929 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:13.621 [2024-10-07 07:43:12.842646] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:13.621 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.621 "name": "Existed_Raid", 00:22:13.621 "aliases": [ 00:22:13.621 "66f919f1-a021-4376-9d2a-e984eb3cdc19" 00:22:13.621 ], 00:22:13.621 "product_name": "Raid Volume", 00:22:13.621 "block_size": 512, 00:22:13.621 "num_blocks": 253952, 00:22:13.621 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:13.621 "assigned_rate_limits": { 00:22:13.621 "rw_ios_per_sec": 0, 00:22:13.621 "rw_mbytes_per_sec": 0, 00:22:13.621 "r_mbytes_per_sec": 0, 00:22:13.621 "w_mbytes_per_sec": 0 00:22:13.621 }, 00:22:13.621 "claimed": false, 00:22:13.621 "zoned": false, 00:22:13.621 "supported_io_types": { 00:22:13.621 "read": true, 00:22:13.621 "write": true, 00:22:13.621 "unmap": true, 00:22:13.621 "flush": true, 00:22:13.621 "reset": true, 00:22:13.621 "nvme_admin": false, 00:22:13.621 "nvme_io": false, 00:22:13.621 "nvme_io_md": false, 00:22:13.621 "write_zeroes": true, 00:22:13.621 "zcopy": false, 00:22:13.621 "get_zone_info": false, 00:22:13.621 "zone_management": false, 00:22:13.621 "zone_append": false, 00:22:13.621 "compare": false, 00:22:13.621 "compare_and_write": false, 
00:22:13.621 "abort": false, 00:22:13.622 "seek_hole": false, 00:22:13.622 "seek_data": false, 00:22:13.622 "copy": false, 00:22:13.622 "nvme_iov_md": false 00:22:13.622 }, 00:22:13.622 "memory_domains": [ 00:22:13.622 { 00:22:13.622 "dma_device_id": "system", 00:22:13.622 "dma_device_type": 1 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.622 "dma_device_type": 2 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "dma_device_id": "system", 00:22:13.622 "dma_device_type": 1 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.622 "dma_device_type": 2 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "dma_device_id": "system", 00:22:13.622 "dma_device_type": 1 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.622 "dma_device_type": 2 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "dma_device_id": "system", 00:22:13.622 "dma_device_type": 1 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.622 "dma_device_type": 2 00:22:13.622 } 00:22:13.622 ], 00:22:13.622 "driver_specific": { 00:22:13.622 "raid": { 00:22:13.622 "uuid": "66f919f1-a021-4376-9d2a-e984eb3cdc19", 00:22:13.622 "strip_size_kb": 64, 00:22:13.622 "state": "online", 00:22:13.622 "raid_level": "raid0", 00:22:13.622 "superblock": true, 00:22:13.622 "num_base_bdevs": 4, 00:22:13.622 "num_base_bdevs_discovered": 4, 00:22:13.622 "num_base_bdevs_operational": 4, 00:22:13.622 "base_bdevs_list": [ 00:22:13.622 { 00:22:13.622 "name": "NewBaseBdev", 00:22:13.622 "uuid": "dcfa0c1d-dc8d-47f1-b1b2-1e9bcedd4cdb", 00:22:13.622 "is_configured": true, 00:22:13.622 "data_offset": 2048, 00:22:13.622 "data_size": 63488 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "name": "BaseBdev2", 00:22:13.622 "uuid": "4ec2b5de-bccd-4056-b0ff-ac39922e881a", 00:22:13.622 "is_configured": true, 00:22:13.622 "data_offset": 2048, 00:22:13.622 "data_size": 63488 00:22:13.622 }, 
00:22:13.622 { 00:22:13.622 "name": "BaseBdev3", 00:22:13.622 "uuid": "f5dde530-cc61-4cc2-8d0e-a0488dce34aa", 00:22:13.622 "is_configured": true, 00:22:13.622 "data_offset": 2048, 00:22:13.622 "data_size": 63488 00:22:13.622 }, 00:22:13.622 { 00:22:13.622 "name": "BaseBdev4", 00:22:13.622 "uuid": "c63891a4-2b4d-4f6a-a943-5c8378867ecd", 00:22:13.622 "is_configured": true, 00:22:13.622 "data_offset": 2048, 00:22:13.622 "data_size": 63488 00:22:13.622 } 00:22:13.622 ] 00:22:13.622 } 00:22:13.622 } 00:22:13.622 }' 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:13.622 BaseBdev2 00:22:13.622 BaseBdev3 00:22:13.622 BaseBdev4' 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.622 07:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.622 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:13.880 07:43:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.880 [2024-10-07 07:43:13.186347] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:13.880 [2024-10-07 07:43:13.186499] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:13.880 [2024-10-07 07:43:13.186713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:13.880 [2024-10-07 07:43:13.186903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:13.880 [2024-10-07 07:43:13.187017] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70193 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 70193 ']' 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 70193 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 70193 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:13.880 killing process with pid 70193 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 70193' 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 70193 00:22:13.880 07:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 70193 00:22:13.880 [2024-10-07 07:43:13.233385] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:14.138 [2024-10-07 07:43:13.678608] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:16.044 07:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:16.044 ************************************ 00:22:16.044 END TEST raid_state_function_test_sb 00:22:16.044 ************************************ 
00:22:16.044 00:22:16.044 real 0m12.327s 00:22:16.044 user 0m19.435s 00:22:16.044 sys 0m2.270s 00:22:16.044 07:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:16.044 07:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.044 07:43:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:16.044 07:43:15 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:22:16.044 07:43:15 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:16.044 07:43:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:16.044 ************************************ 00:22:16.044 START TEST raid_superblock_test 00:22:16.044 ************************************ 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid0 4 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:16.044 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local 
strip_size 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:16.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70870 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70870 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 70870 ']' 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:16.045 07:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.045 [2024-10-07 07:43:15.240347] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:16.045 [2024-10-07 07:43:15.240508] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70870 ] 00:22:16.045 [2024-10-07 07:43:15.403244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.303 [2024-10-07 07:43:15.620890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.303 [2024-10-07 07:43:15.849675] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.303 [2024-10-07 07:43:15.849728] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:16.870 
07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.870 malloc1 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.870 [2024-10-07 07:43:16.299984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:16.870 [2024-10-07 07:43:16.300212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.870 [2024-10-07 07:43:16.300338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:16.870 [2024-10-07 07:43:16.300435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.870 [2024-10-07 07:43:16.303314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.870 [2024-10-07 07:43:16.303480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:16.870 pt1 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.870 malloc2 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:16.870 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.871 [2024-10-07 07:43:16.371200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:16.871 [2024-10-07 07:43:16.371393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.871 [2024-10-07 07:43:16.371546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:16.871 [2024-10-07 07:43:16.371637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.871 [2024-10-07 07:43:16.374427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.871 pt2 00:22:16.871 [2024-10-07 07:43:16.374584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.871 malloc3 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:16.871 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.871 [2024-10-07 07:43:16.426597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:16.871 [2024-10-07 07:43:16.426881] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.871 [2024-10-07 07:43:16.427088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:16.871 [2024-10-07 07:43:16.427197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.130 [2024-10-07 07:43:16.430161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.130 [2024-10-07 07:43:16.430337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:17.130 pt3 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 malloc4 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 [2024-10-07 07:43:16.486515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:17.130 [2024-10-07 07:43:16.486735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.130 [2024-10-07 07:43:16.486774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:17.130 [2024-10-07 07:43:16.486787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.130 [2024-10-07 07:43:16.489611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.130 [2024-10-07 07:43:16.489783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:17.130 pt4 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 [2024-10-07 07:43:16.498722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.130 [2024-10-07 
07:43:16.501311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.130 [2024-10-07 07:43:16.501529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:17.130 [2024-10-07 07:43:16.501774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:17.130 [2024-10-07 07:43:16.502106] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:17.130 [2024-10-07 07:43:16.502229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:17.130 [2024-10-07 07:43:16.502630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:17.130 [2024-10-07 07:43:16.502990] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:17.130 [2024-10-07 07:43:16.503110] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:17.130 [2024-10-07 07:43:16.503454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.130 "name": "raid_bdev1", 00:22:17.130 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:17.130 "strip_size_kb": 64, 00:22:17.130 "state": "online", 00:22:17.130 "raid_level": "raid0", 00:22:17.130 "superblock": true, 00:22:17.130 "num_base_bdevs": 4, 00:22:17.130 "num_base_bdevs_discovered": 4, 00:22:17.130 "num_base_bdevs_operational": 4, 00:22:17.130 "base_bdevs_list": [ 00:22:17.130 { 00:22:17.130 "name": "pt1", 00:22:17.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.130 "is_configured": true, 00:22:17.130 "data_offset": 2048, 00:22:17.130 "data_size": 63488 00:22:17.130 }, 00:22:17.130 { 00:22:17.130 "name": "pt2", 00:22:17.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.130 "is_configured": true, 00:22:17.130 "data_offset": 2048, 00:22:17.130 "data_size": 63488 00:22:17.130 }, 00:22:17.130 { 00:22:17.130 "name": "pt3", 00:22:17.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.130 "is_configured": true, 00:22:17.130 "data_offset": 2048, 00:22:17.130 
"data_size": 63488 00:22:17.130 }, 00:22:17.130 { 00:22:17.130 "name": "pt4", 00:22:17.130 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:17.130 "is_configured": true, 00:22:17.130 "data_offset": 2048, 00:22:17.130 "data_size": 63488 00:22:17.130 } 00:22:17.130 ] 00:22:17.130 }' 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.130 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.698 07:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:17.698 [2024-10-07 07:43:16.983893] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:17.698 "name": "raid_bdev1", 00:22:17.698 "aliases": [ 00:22:17.698 "b88b1310-67a9-4c37-8553-5d7480cac667" 
00:22:17.698 ], 00:22:17.698 "product_name": "Raid Volume", 00:22:17.698 "block_size": 512, 00:22:17.698 "num_blocks": 253952, 00:22:17.698 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:17.698 "assigned_rate_limits": { 00:22:17.698 "rw_ios_per_sec": 0, 00:22:17.698 "rw_mbytes_per_sec": 0, 00:22:17.698 "r_mbytes_per_sec": 0, 00:22:17.698 "w_mbytes_per_sec": 0 00:22:17.698 }, 00:22:17.698 "claimed": false, 00:22:17.698 "zoned": false, 00:22:17.698 "supported_io_types": { 00:22:17.698 "read": true, 00:22:17.698 "write": true, 00:22:17.698 "unmap": true, 00:22:17.698 "flush": true, 00:22:17.698 "reset": true, 00:22:17.698 "nvme_admin": false, 00:22:17.698 "nvme_io": false, 00:22:17.698 "nvme_io_md": false, 00:22:17.698 "write_zeroes": true, 00:22:17.698 "zcopy": false, 00:22:17.698 "get_zone_info": false, 00:22:17.698 "zone_management": false, 00:22:17.698 "zone_append": false, 00:22:17.698 "compare": false, 00:22:17.698 "compare_and_write": false, 00:22:17.698 "abort": false, 00:22:17.698 "seek_hole": false, 00:22:17.698 "seek_data": false, 00:22:17.698 "copy": false, 00:22:17.698 "nvme_iov_md": false 00:22:17.698 }, 00:22:17.698 "memory_domains": [ 00:22:17.698 { 00:22:17.698 "dma_device_id": "system", 00:22:17.698 "dma_device_type": 1 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.698 "dma_device_type": 2 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "dma_device_id": "system", 00:22:17.698 "dma_device_type": 1 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.698 "dma_device_type": 2 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "dma_device_id": "system", 00:22:17.698 "dma_device_type": 1 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.698 "dma_device_type": 2 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "dma_device_id": "system", 00:22:17.698 "dma_device_type": 1 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:17.698 "dma_device_type": 2 00:22:17.698 } 00:22:17.698 ], 00:22:17.698 "driver_specific": { 00:22:17.698 "raid": { 00:22:17.698 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:17.698 "strip_size_kb": 64, 00:22:17.698 "state": "online", 00:22:17.698 "raid_level": "raid0", 00:22:17.698 "superblock": true, 00:22:17.698 "num_base_bdevs": 4, 00:22:17.698 "num_base_bdevs_discovered": 4, 00:22:17.698 "num_base_bdevs_operational": 4, 00:22:17.698 "base_bdevs_list": [ 00:22:17.698 { 00:22:17.698 "name": "pt1", 00:22:17.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.698 "is_configured": true, 00:22:17.698 "data_offset": 2048, 00:22:17.698 "data_size": 63488 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "name": "pt2", 00:22:17.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.698 "is_configured": true, 00:22:17.698 "data_offset": 2048, 00:22:17.698 "data_size": 63488 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "name": "pt3", 00:22:17.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.698 "is_configured": true, 00:22:17.698 "data_offset": 2048, 00:22:17.698 "data_size": 63488 00:22:17.698 }, 00:22:17.698 { 00:22:17.698 "name": "pt4", 00:22:17.698 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:17.698 "is_configured": true, 00:22:17.698 "data_offset": 2048, 00:22:17.698 "data_size": 63488 00:22:17.698 } 00:22:17.698 ] 00:22:17.698 } 00:22:17.698 } 00:22:17.698 }' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:17.698 pt2 00:22:17.698 pt3 00:22:17.698 pt4' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.698 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.699 07:43:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.699 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:17.992 [2024-10-07 07:43:17.303896] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b88b1310-67a9-4c37-8553-5d7480cac667 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b88b1310-67a9-4c37-8553-5d7480cac667 ']' 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 [2024-10-07 07:43:17.347601] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.992 [2024-10-07 07:43:17.347811] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.992 [2024-10-07 07:43:17.348025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.992 [2024-10-07 07:43:17.348110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.992 [2024-10-07 07:43:17.348130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.992 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # 
[[ 0 == 0 ]] 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@645 -- # type -t rpc_cmd 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.993 [2024-10-07 07:43:17.495653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:17.993 [2024-10-07 07:43:17.498195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:17.993 [2024-10-07 07:43:17.498257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:17.993 [2024-10-07 07:43:17.498297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:17.993 [2024-10-07 07:43:17.498351] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:17.993 [2024-10-07 07:43:17.498416] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:17.993 [2024-10-07 07:43:17.498442] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:17.993 [2024-10-07 07:43:17.498467] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:17.993 [2024-10-07 07:43:17.498487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.993 [2024-10-07 07:43:17.498501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:22:17.993 request: 00:22:17.993 { 00:22:17.993 "name": "raid_bdev1", 00:22:17.993 "raid_level": "raid0", 00:22:17.993 "base_bdevs": [ 00:22:17.993 "malloc1", 00:22:17.993 "malloc2", 00:22:17.993 "malloc3", 00:22:17.993 "malloc4" 00:22:17.993 ], 00:22:17.993 "strip_size_kb": 64, 00:22:17.993 "superblock": false, 00:22:17.993 "method": "bdev_raid_create", 00:22:17.993 "req_id": 1 00:22:17.993 } 00:22:17.993 Got JSON-RPC error response 00:22:17.993 response: 00:22:17.993 { 00:22:17.993 "code": -17, 00:22:17.993 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:17.993 } 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:17.993 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.993 [2024-10-07 07:43:17.547624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.993 [2024-10-07 07:43:17.547824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.993 [2024-10-07 07:43:17.547901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:17.993 [2024-10-07 07:43:17.547991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.993 [2024-10-07 07:43:17.550871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.993 [2024-10-07 07:43:17.550921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.993 [2024-10-07 07:43:17.551012] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:17.993 [2024-10-07 07:43:17.551080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:18.252 pt1 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.252 "name": "raid_bdev1", 00:22:18.252 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:18.252 "strip_size_kb": 64, 00:22:18.252 "state": "configuring", 00:22:18.252 "raid_level": "raid0", 00:22:18.252 "superblock": true, 00:22:18.252 "num_base_bdevs": 4, 00:22:18.252 "num_base_bdevs_discovered": 1, 00:22:18.252 "num_base_bdevs_operational": 4, 00:22:18.252 "base_bdevs_list": [ 00:22:18.252 { 00:22:18.252 "name": "pt1", 00:22:18.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.252 "is_configured": true, 00:22:18.252 "data_offset": 2048, 00:22:18.252 "data_size": 63488 00:22:18.252 }, 00:22:18.252 { 00:22:18.252 "name": null, 00:22:18.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.252 "is_configured": false, 00:22:18.252 "data_offset": 2048, 00:22:18.252 "data_size": 63488 00:22:18.252 }, 00:22:18.252 { 00:22:18.252 "name": null, 00:22:18.252 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:22:18.252 "is_configured": false, 00:22:18.252 "data_offset": 2048, 00:22:18.252 "data_size": 63488 00:22:18.252 }, 00:22:18.252 { 00:22:18.252 "name": null, 00:22:18.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:18.252 "is_configured": false, 00:22:18.252 "data_offset": 2048, 00:22:18.252 "data_size": 63488 00:22:18.252 } 00:22:18.252 ] 00:22:18.252 }' 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.252 07:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 [2024-10-07 07:43:18.007748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:18.511 [2024-10-07 07:43:18.007955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.511 [2024-10-07 07:43:18.008018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:18.511 [2024-10-07 07:43:18.008037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.511 [2024-10-07 07:43:18.008537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.511 [2024-10-07 07:43:18.008569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:18.511 [2024-10-07 07:43:18.008692] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:18.511 [2024-10-07 07:43:18.008736] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.511 pt2 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 [2024-10-07 07:43:18.015758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.511 07:43:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.511 "name": "raid_bdev1", 00:22:18.511 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:18.511 "strip_size_kb": 64, 00:22:18.511 "state": "configuring", 00:22:18.511 "raid_level": "raid0", 00:22:18.511 "superblock": true, 00:22:18.511 "num_base_bdevs": 4, 00:22:18.511 "num_base_bdevs_discovered": 1, 00:22:18.511 "num_base_bdevs_operational": 4, 00:22:18.511 "base_bdevs_list": [ 00:22:18.511 { 00:22:18.511 "name": "pt1", 00:22:18.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.511 "is_configured": true, 00:22:18.511 "data_offset": 2048, 00:22:18.511 "data_size": 63488 00:22:18.511 }, 00:22:18.511 { 00:22:18.511 "name": null, 00:22:18.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.511 "is_configured": false, 00:22:18.511 "data_offset": 0, 00:22:18.511 "data_size": 63488 00:22:18.511 }, 00:22:18.511 { 00:22:18.511 "name": null, 00:22:18.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.511 "is_configured": false, 00:22:18.511 "data_offset": 2048, 00:22:18.511 "data_size": 63488 00:22:18.511 }, 00:22:18.511 { 00:22:18.511 "name": null, 00:22:18.511 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:18.511 "is_configured": false, 00:22:18.511 "data_offset": 2048, 00:22:18.511 "data_size": 63488 00:22:18.511 } 00:22:18.511 ] 00:22:18.511 }' 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.511 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:22:19.077 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:19.077 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.077 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:19.077 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.077 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.077 [2024-10-07 07:43:18.459889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:19.077 [2024-10-07 07:43:18.460093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.077 [2024-10-07 07:43:18.460158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:19.077 [2024-10-07 07:43:18.460321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.077 [2024-10-07 07:43:18.460989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.077 [2024-10-07 07:43:18.461122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:19.077 [2024-10-07 07:43:18.461322] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:19.077 [2024-10-07 07:43:18.461447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:19.077 pt2 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.078 [2024-10-07 07:43:18.471844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:19.078 [2024-10-07 07:43:18.472014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.078 [2024-10-07 07:43:18.472079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:19.078 [2024-10-07 07:43:18.472246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.078 [2024-10-07 07:43:18.472803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.078 [2024-10-07 07:43:18.472966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:19.078 [2024-10-07 07:43:18.473149] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:19.078 [2024-10-07 07:43:18.473267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:19.078 pt3 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.078 [2024-10-07 07:43:18.479813] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:22:19.078 [2024-10-07 07:43:18.479968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.078 [2024-10-07 07:43:18.479998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:19.078 [2024-10-07 07:43:18.480011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.078 [2024-10-07 07:43:18.480439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.078 [2024-10-07 07:43:18.480467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:19.078 [2024-10-07 07:43:18.480535] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:19.078 [2024-10-07 07:43:18.480563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:19.078 [2024-10-07 07:43:18.480743] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:19.078 [2024-10-07 07:43:18.480754] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:19.078 [2024-10-07 07:43:18.481022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:19.078 [2024-10-07 07:43:18.481183] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:19.078 [2024-10-07 07:43:18.481198] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:19.078 [2024-10-07 07:43:18.481358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.078 pt4 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.078 
07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.078 "name": "raid_bdev1", 00:22:19.078 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:19.078 "strip_size_kb": 64, 00:22:19.078 "state": "online", 00:22:19.078 "raid_level": "raid0", 00:22:19.078 "superblock": true, 00:22:19.078 
"num_base_bdevs": 4, 00:22:19.078 "num_base_bdevs_discovered": 4, 00:22:19.078 "num_base_bdevs_operational": 4, 00:22:19.078 "base_bdevs_list": [ 00:22:19.078 { 00:22:19.078 "name": "pt1", 00:22:19.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.078 "is_configured": true, 00:22:19.078 "data_offset": 2048, 00:22:19.078 "data_size": 63488 00:22:19.078 }, 00:22:19.078 { 00:22:19.078 "name": "pt2", 00:22:19.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.078 "is_configured": true, 00:22:19.078 "data_offset": 2048, 00:22:19.078 "data_size": 63488 00:22:19.078 }, 00:22:19.078 { 00:22:19.078 "name": "pt3", 00:22:19.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.078 "is_configured": true, 00:22:19.078 "data_offset": 2048, 00:22:19.078 "data_size": 63488 00:22:19.078 }, 00:22:19.078 { 00:22:19.078 "name": "pt4", 00:22:19.078 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.078 "is_configured": true, 00:22:19.078 "data_offset": 2048, 00:22:19.078 "data_size": 63488 00:22:19.078 } 00:22:19.078 ] 00:22:19.078 }' 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.078 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.338 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:19.597 [2024-10-07 07:43:18.900334] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.597 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.597 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.597 "name": "raid_bdev1", 00:22:19.597 "aliases": [ 00:22:19.597 "b88b1310-67a9-4c37-8553-5d7480cac667" 00:22:19.597 ], 00:22:19.597 "product_name": "Raid Volume", 00:22:19.597 "block_size": 512, 00:22:19.597 "num_blocks": 253952, 00:22:19.597 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:19.597 "assigned_rate_limits": { 00:22:19.597 "rw_ios_per_sec": 0, 00:22:19.597 "rw_mbytes_per_sec": 0, 00:22:19.597 "r_mbytes_per_sec": 0, 00:22:19.597 "w_mbytes_per_sec": 0 00:22:19.597 }, 00:22:19.597 "claimed": false, 00:22:19.597 "zoned": false, 00:22:19.597 "supported_io_types": { 00:22:19.597 "read": true, 00:22:19.597 "write": true, 00:22:19.597 "unmap": true, 00:22:19.597 "flush": true, 00:22:19.597 "reset": true, 00:22:19.597 "nvme_admin": false, 00:22:19.597 "nvme_io": false, 00:22:19.597 "nvme_io_md": false, 00:22:19.597 "write_zeroes": true, 00:22:19.597 "zcopy": false, 00:22:19.597 "get_zone_info": false, 00:22:19.597 "zone_management": false, 00:22:19.597 "zone_append": false, 00:22:19.597 "compare": false, 00:22:19.597 "compare_and_write": false, 00:22:19.597 "abort": false, 00:22:19.597 "seek_hole": false, 00:22:19.597 "seek_data": false, 00:22:19.597 "copy": false, 00:22:19.597 "nvme_iov_md": false 00:22:19.597 }, 00:22:19.597 "memory_domains": [ 00:22:19.597 { 00:22:19.597 "dma_device_id": "system", 
00:22:19.597 "dma_device_type": 1 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.597 "dma_device_type": 2 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "dma_device_id": "system", 00:22:19.597 "dma_device_type": 1 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.597 "dma_device_type": 2 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "dma_device_id": "system", 00:22:19.597 "dma_device_type": 1 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.597 "dma_device_type": 2 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "dma_device_id": "system", 00:22:19.597 "dma_device_type": 1 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.597 "dma_device_type": 2 00:22:19.597 } 00:22:19.597 ], 00:22:19.597 "driver_specific": { 00:22:19.597 "raid": { 00:22:19.597 "uuid": "b88b1310-67a9-4c37-8553-5d7480cac667", 00:22:19.597 "strip_size_kb": 64, 00:22:19.597 "state": "online", 00:22:19.597 "raid_level": "raid0", 00:22:19.597 "superblock": true, 00:22:19.597 "num_base_bdevs": 4, 00:22:19.597 "num_base_bdevs_discovered": 4, 00:22:19.597 "num_base_bdevs_operational": 4, 00:22:19.597 "base_bdevs_list": [ 00:22:19.597 { 00:22:19.597 "name": "pt1", 00:22:19.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.597 "is_configured": true, 00:22:19.597 "data_offset": 2048, 00:22:19.597 "data_size": 63488 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "name": "pt2", 00:22:19.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.597 "is_configured": true, 00:22:19.597 "data_offset": 2048, 00:22:19.597 "data_size": 63488 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "name": "pt3", 00:22:19.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.597 "is_configured": true, 00:22:19.597 "data_offset": 2048, 00:22:19.597 "data_size": 63488 00:22:19.597 }, 00:22:19.597 { 00:22:19.597 "name": "pt4", 00:22:19.597 
"uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.597 "is_configured": true, 00:22:19.597 "data_offset": 2048, 00:22:19.597 "data_size": 63488 00:22:19.597 } 00:22:19.597 ] 00:22:19.597 } 00:22:19.597 } 00:22:19.597 }' 00:22:19.597 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.597 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:19.597 pt2 00:22:19.597 pt3 00:22:19.597 pt4' 00:22:19.597 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.598 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:19.598 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.598 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:19.598 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.598 07:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.598 07:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 
-- # xtrace_disable 00:22:19.598 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:19.857 [2024-10-07 07:43:19.192354] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b88b1310-67a9-4c37-8553-5d7480cac667 '!=' b88b1310-67a9-4c37-8553-5d7480cac667 ']' 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70870 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 70870 ']' 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 70870 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:22:19.857 07:43:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 70870 00:22:19.857 killing process with pid 70870 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 70870' 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 70870 00:22:19.857 [2024-10-07 07:43:19.272935] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:19.857 [2024-10-07 07:43:19.273031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:19.857 07:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 70870 00:22:19.857 [2024-10-07 07:43:19.273115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:19.857 [2024-10-07 07:43:19.273128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:20.424 [2024-10-07 07:43:19.699394] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:21.800 07:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:21.800 00:22:21.800 real 0m5.870s 00:22:21.800 user 0m8.392s 00:22:21.800 sys 0m0.978s 00:22:21.800 07:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:21.800 07:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.800 ************************************ 00:22:21.800 END TEST raid_superblock_test 00:22:21.800 ************************************ 00:22:21.800 
07:43:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:22:21.800 07:43:21 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:22:21.800 07:43:21 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:21.800 07:43:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:21.800 ************************************ 00:22:21.800 START TEST raid_read_error_test 00:22:21.800 ************************************ 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid0 4 read 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:21.800 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k0zsYFZ5D5 00:22:21.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71135 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71135 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 71135 ']' 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:21.801 07:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.801 [2024-10-07 07:43:21.197526] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:21.801 [2024-10-07 07:43:21.197700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71135 ] 00:22:22.060 [2024-10-07 07:43:21.378289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.060 [2024-10-07 07:43:21.583101] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.320 [2024-10-07 07:43:21.781440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.320 [2024-10-07 07:43:21.781504] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.580 BaseBdev1_malloc 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.580 true 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.580 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.839 [2024-10-07 07:43:22.144892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:22.839 [2024-10-07 07:43:22.144951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.839 [2024-10-07 07:43:22.144972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:22.839 [2024-10-07 07:43:22.144986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.839 [2024-10-07 07:43:22.147364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.840 [2024-10-07 07:43:22.147409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:22.840 BaseBdev1 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 BaseBdev2_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 true 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 [2024-10-07 07:43:22.222199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:22.840 [2024-10-07 07:43:22.223253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.840 [2024-10-07 07:43:22.223284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:22.840 [2024-10-07 07:43:22.223300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.840 [2024-10-07 07:43:22.225987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.840 [2024-10-07 07:43:22.226032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:22.840 BaseBdev2 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 BaseBdev3_malloc 00:22:22.840 07:43:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 true 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 [2024-10-07 07:43:22.285072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:22.840 [2024-10-07 07:43:22.285264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.840 [2024-10-07 07:43:22.285326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:22.840 [2024-10-07 07:43:22.285346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.840 [2024-10-07 07:43:22.287991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.840 [2024-10-07 07:43:22.288032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:22.840 BaseBdev3 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 BaseBdev4_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 true 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 [2024-10-07 07:43:22.347078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:22.840 [2024-10-07 07:43:22.347136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.840 [2024-10-07 07:43:22.347158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:22.840 [2024-10-07 07:43:22.347174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.840 [2024-10-07 07:43:22.349693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.840 [2024-10-07 07:43:22.349762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:22.840 BaseBdev4 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 [2024-10-07 07:43:22.355159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.840 [2024-10-07 07:43:22.357554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.840 [2024-10-07 07:43:22.357794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:22.840 [2024-10-07 07:43:22.357906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:22.840 [2024-10-07 07:43:22.358228] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:22.840 [2024-10-07 07:43:22.358252] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:22.840 [2024-10-07 07:43:22.358539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:22.840 [2024-10-07 07:43:22.358695] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:22.840 [2024-10-07 07:43:22.358728] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:22.840 [2024-10-07 07:43:22.358911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:22.840 07:43:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.840 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:23.100 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.100 "name": "raid_bdev1", 00:22:23.100 "uuid": "4ee6f33d-3b13-4d0d-b95f-8c280779e84f", 00:22:23.100 "strip_size_kb": 64, 00:22:23.100 "state": "online", 00:22:23.100 "raid_level": "raid0", 00:22:23.100 "superblock": true, 00:22:23.100 "num_base_bdevs": 4, 00:22:23.100 "num_base_bdevs_discovered": 4, 00:22:23.100 "num_base_bdevs_operational": 4, 00:22:23.100 "base_bdevs_list": [ 00:22:23.100 
{ 00:22:23.100 "name": "BaseBdev1", 00:22:23.100 "uuid": "fa72adcf-68f7-54ab-baf4-b4bc9628ded5", 00:22:23.100 "is_configured": true, 00:22:23.100 "data_offset": 2048, 00:22:23.100 "data_size": 63488 00:22:23.100 }, 00:22:23.100 { 00:22:23.100 "name": "BaseBdev2", 00:22:23.100 "uuid": "ac1ce43e-ab81-5eb4-9b26-b1cff01faac2", 00:22:23.100 "is_configured": true, 00:22:23.100 "data_offset": 2048, 00:22:23.100 "data_size": 63488 00:22:23.100 }, 00:22:23.100 { 00:22:23.100 "name": "BaseBdev3", 00:22:23.100 "uuid": "9e2296a6-bb8e-5c1f-b472-f024f433efb0", 00:22:23.100 "is_configured": true, 00:22:23.100 "data_offset": 2048, 00:22:23.100 "data_size": 63488 00:22:23.100 }, 00:22:23.100 { 00:22:23.100 "name": "BaseBdev4", 00:22:23.100 "uuid": "490cad21-03e9-561b-8bed-459d1409e612", 00:22:23.100 "is_configured": true, 00:22:23.100 "data_offset": 2048, 00:22:23.100 "data_size": 63488 00:22:23.100 } 00:22:23.100 ] 00:22:23.100 }' 00:22:23.100 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.100 07:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.359 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:23.359 07:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:23.359 [2024-10-07 07:43:22.916806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:24.295 07:43:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:24.295 07:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.296 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.296 07:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:24.555 07:43:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.555 "name": "raid_bdev1", 00:22:24.555 "uuid": "4ee6f33d-3b13-4d0d-b95f-8c280779e84f", 00:22:24.555 "strip_size_kb": 64, 00:22:24.555 "state": "online", 00:22:24.555 "raid_level": "raid0", 00:22:24.555 "superblock": true, 00:22:24.555 "num_base_bdevs": 4, 00:22:24.555 "num_base_bdevs_discovered": 4, 00:22:24.555 "num_base_bdevs_operational": 4, 00:22:24.555 "base_bdevs_list": [ 00:22:24.555 { 00:22:24.555 "name": "BaseBdev1", 00:22:24.555 "uuid": "fa72adcf-68f7-54ab-baf4-b4bc9628ded5", 00:22:24.555 "is_configured": true, 00:22:24.555 "data_offset": 2048, 00:22:24.555 "data_size": 63488 00:22:24.555 }, 00:22:24.555 { 00:22:24.555 "name": "BaseBdev2", 00:22:24.555 "uuid": "ac1ce43e-ab81-5eb4-9b26-b1cff01faac2", 00:22:24.555 "is_configured": true, 00:22:24.555 "data_offset": 2048, 00:22:24.555 "data_size": 63488 00:22:24.555 }, 00:22:24.555 { 00:22:24.555 "name": "BaseBdev3", 00:22:24.555 "uuid": "9e2296a6-bb8e-5c1f-b472-f024f433efb0", 00:22:24.555 "is_configured": true, 00:22:24.555 "data_offset": 2048, 00:22:24.555 "data_size": 63488 00:22:24.555 }, 00:22:24.555 { 00:22:24.555 "name": "BaseBdev4", 00:22:24.555 "uuid": "490cad21-03e9-561b-8bed-459d1409e612", 00:22:24.555 "is_configured": true, 00:22:24.555 "data_offset": 2048, 00:22:24.555 "data_size": 63488 00:22:24.555 } 00:22:24.555 ] 00:22:24.555 }' 00:22:24.555 07:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.555 07:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.814 [2024-10-07 07:43:24.316703] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.814 [2024-10-07 07:43:24.316887] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:24.814 [2024-10-07 07:43:24.319989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.814 [2024-10-07 07:43:24.320183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.814 [2024-10-07 07:43:24.320276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.814 { 00:22:24.814 "results": [ 00:22:24.814 { 00:22:24.814 "job": "raid_bdev1", 00:22:24.814 "core_mask": "0x1", 00:22:24.814 "workload": "randrw", 00:22:24.814 "percentage": 50, 00:22:24.814 "status": "finished", 00:22:24.814 "queue_depth": 1, 00:22:24.814 "io_size": 131072, 00:22:24.814 "runtime": 1.397756, 00:22:24.814 "iops": 15044.11356488543, 00:22:24.814 "mibps": 1880.5141956106788, 00:22:24.814 "io_failed": 1, 00:22:24.814 "io_timeout": 0, 00:22:24.814 "avg_latency_us": 92.08425951463852, 00:22:24.814 "min_latency_us": 27.30666666666667, 00:22:24.814 "max_latency_us": 1583.7866666666666 00:22:24.814 } 00:22:24.814 ], 00:22:24.814 "core_count": 1 00:22:24.814 } 00:22:24.814 [2024-10-07 07:43:24.320387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71135 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 71135 ']' 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 71135 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test
-- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 71135 00:22:24.814 killing process with pid 71135 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 71135' 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 71135 00:22:24.814 [2024-10-07 07:43:24.361439] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:24.814 07:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 71135 00:22:25.383 [2024-10-07 07:43:24.707503] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.761 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:26.761 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:26.761 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k0zsYFZ5D5 00:22:26.761 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:22:26.761 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:26.761 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:26.761 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:26.762 07:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:22:26.762 00:22:26.762 real 0m5.111s 00:22:26.762 user 0m6.029s 00:22:26.762 sys 0m0.635s 00:22:26.762 ************************************ 00:22:26.762 END TEST raid_read_error_test 
00:22:26.762 ************************************ 00:22:26.762 07:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:26.762 07:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.762 07:43:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:22:26.762 07:43:26 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:22:26.762 07:43:26 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:26.762 07:43:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:26.762 ************************************ 00:22:26.762 START TEST raid_write_error_test 00:22:26.762 ************************************ 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid0 4 write 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xFqsverYme 00:22:26.762 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71286 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71286 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 71286 ']' 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:26.762 07:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.021 [2024-10-07 07:43:26.393015] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:27.021 [2024-10-07 07:43:26.393456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71286 ] 00:22:27.021 [2024-10-07 07:43:26.576283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.279 [2024-10-07 07:43:26.792654] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.538 [2024-10-07 07:43:27.009617] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.538 [2024-10-07 07:43:27.009925] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.796 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.797 BaseBdev1_malloc 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.797 true 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.797 [2024-10-07 07:43:27.340946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:27.797 [2024-10-07 07:43:27.341006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.797 [2024-10-07 07:43:27.341028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:27.797 [2024-10-07 07:43:27.341043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.797 [2024-10-07 07:43:27.343503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.797 [2024-10-07 07:43:27.343686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:27.797 BaseBdev1 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:27.797 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.056 BaseBdev2_malloc 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:28.056 07:43:27
bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.056 true 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.056 [2024-10-07 07:43:27.418562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:28.056 [2024-10-07 07:43:27.418758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.056 [2024-10-07 07:43:27.418819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:28.056 [2024-10-07 07:43:27.418920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.056 [2024-10-07 07:43:27.421416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.056 [2024-10-07 07:43:27.421586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:28.056 BaseBdev2 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:22:28.056 BaseBdev3_malloc 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.056 true 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.056 [2024-10-07 07:43:27.480249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:28.056 [2024-10-07 07:43:27.480440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.056 [2024-10-07 07:43:27.480502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:28.056 [2024-10-07 07:43:27.480608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.056 [2024-10-07 07:43:27.483395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.056 [2024-10-07 07:43:27.483579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:28.056 BaseBdev3 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test --
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.056 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.057 BaseBdev4_malloc 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.057 true 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.057 [2024-10-07 07:43:27.542083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:28.057 [2024-10-07 07:43:27.542144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.057 [2024-10-07 07:43:27.542164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:28.057 [2024-10-07 07:43:27.542178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.057 [2024-10-07 07:43:27.544696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.057 [2024-10-07 07:43:27.544759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:28.057 BaseBdev4 
00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.057 [2024-10-07 07:43:27.550167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.057 [2024-10-07 07:43:27.552431] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:28.057 [2024-10-07 07:43:27.552639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.057 [2024-10-07 07:43:27.552766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:28.057 [2024-10-07 07:43:27.553009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:28.057 [2024-10-07 07:43:27.553027] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:28.057 [2024-10-07 07:43:27.553299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:28.057 [2024-10-07 07:43:27.553465] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:28.057 [2024-10-07 07:43:27.553476] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:28.057 [2024-10-07 07:43:27.553655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.057 "name": "raid_bdev1", 00:22:28.057 "uuid": "4e8e3a5c-cc59-4f96-88fe-68c2e291b6b5", 00:22:28.057 "strip_size_kb": 64, 00:22:28.057 "state": "online", 00:22:28.057 "raid_level": "raid0", 00:22:28.057 "superblock": true, 00:22:28.057 "num_base_bdevs": 4, 00:22:28.057 "num_base_bdevs_discovered": 4, 00:22:28.057 
"num_base_bdevs_operational": 4, 00:22:28.057 "base_bdevs_list": [ 00:22:28.057 { 00:22:28.057 "name": "BaseBdev1", 00:22:28.057 "uuid": "0a9880e6-bb8a-530b-92d2-53734274cf34", 00:22:28.057 "is_configured": true, 00:22:28.057 "data_offset": 2048, 00:22:28.057 "data_size": 63488 00:22:28.057 }, 00:22:28.057 { 00:22:28.057 "name": "BaseBdev2", 00:22:28.057 "uuid": "bbb98b34-102e-540c-8a95-6aa61a5d8868", 00:22:28.057 "is_configured": true, 00:22:28.057 "data_offset": 2048, 00:22:28.057 "data_size": 63488 00:22:28.057 }, 00:22:28.057 { 00:22:28.057 "name": "BaseBdev3", 00:22:28.057 "uuid": "3b4d3d32-f50b-5179-8ecd-c2ddf7e0f135", 00:22:28.057 "is_configured": true, 00:22:28.057 "data_offset": 2048, 00:22:28.057 "data_size": 63488 00:22:28.057 }, 00:22:28.057 { 00:22:28.057 "name": "BaseBdev4", 00:22:28.057 "uuid": "d4106c9b-3c30-5407-98fe-057da0740f27", 00:22:28.057 "is_configured": true, 00:22:28.057 "data_offset": 2048, 00:22:28.057 "data_size": 63488 00:22:28.057 } 00:22:28.057 ] 00:22:28.057 }' 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.057 07:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.625 07:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:28.625 07:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:28.625 [2024-10-07 07:43:28.147650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.566 "name": "raid_bdev1", 00:22:29.566 "uuid": "4e8e3a5c-cc59-4f96-88fe-68c2e291b6b5", 00:22:29.566 "strip_size_kb": 64, 00:22:29.566 "state": "online", 00:22:29.566 "raid_level": "raid0", 00:22:29.566 "superblock": true, 00:22:29.566 "num_base_bdevs": 4, 00:22:29.566 "num_base_bdevs_discovered": 4, 00:22:29.566 "num_base_bdevs_operational": 4, 00:22:29.566 "base_bdevs_list": [ 00:22:29.566 { 00:22:29.566 "name": "BaseBdev1", 00:22:29.566 "uuid": "0a9880e6-bb8a-530b-92d2-53734274cf34", 00:22:29.566 "is_configured": true, 00:22:29.566 "data_offset": 2048, 00:22:29.566 "data_size": 63488 00:22:29.566 }, 00:22:29.566 { 00:22:29.566 "name": "BaseBdev2", 00:22:29.566 "uuid": "bbb98b34-102e-540c-8a95-6aa61a5d8868", 00:22:29.566 "is_configured": true, 00:22:29.566 "data_offset": 2048, 00:22:29.566 "data_size": 63488 00:22:29.566 }, 00:22:29.566 { 00:22:29.566 "name": "BaseBdev3", 00:22:29.566 "uuid": "3b4d3d32-f50b-5179-8ecd-c2ddf7e0f135", 00:22:29.566 "is_configured": true, 00:22:29.566 "data_offset": 2048, 00:22:29.566 "data_size": 63488 00:22:29.566 }, 00:22:29.566 { 00:22:29.566 "name": "BaseBdev4", 00:22:29.566 "uuid": "d4106c9b-3c30-5407-98fe-057da0740f27", 00:22:29.566 "is_configured": true, 00:22:29.566 "data_offset": 2048, 00:22:29.566 "data_size": 63488 00:22:29.566 } 00:22:29.566 ] 00:22:29.566 }' 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.566 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:22:30.134 [2024-10-07 07:43:29.471097] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:30.134 [2024-10-07 07:43:29.471284] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:30.134 [2024-10-07 07:43:29.474669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.134 [2024-10-07 07:43:29.474758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.134 [2024-10-07 07:43:29.474811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:30.134 [2024-10-07 07:43:29.474827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:30.134 { 00:22:30.134 "results": [ 00:22:30.134 { 00:22:30.134 "job": "raid_bdev1", 00:22:30.134 "core_mask": "0x1", 00:22:30.134 "workload": "randrw", 00:22:30.134 "percentage": 50, 00:22:30.134 "status": "finished", 00:22:30.134 "queue_depth": 1, 00:22:30.134 "io_size": 131072, 00:22:30.134 "runtime": 1.321239, 00:22:30.134 "iops": 14849.697897201037, 00:22:30.134 "mibps": 1856.2122371501296, 00:22:30.134 "io_failed": 1, 00:22:30.134 "io_timeout": 0, 00:22:30.134 "avg_latency_us": 93.33825012559431, 00:22:30.134 "min_latency_us": 27.794285714285714, 00:22:30.134 "max_latency_us": 1458.9561904761904 00:22:30.134 } 00:22:30.134 ], 00:22:30.134 "core_count": 1 00:22:30.134 } 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71286 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 71286 ']' 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 71286 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # 
uname 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 71286 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:30.134 killing process with pid 71286 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 71286' 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 71286 00:22:30.134 [2024-10-07 07:43:29.520368] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:30.134 07:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 71286 00:22:30.394 [2024-10-07 07:43:29.879827] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:32.297 07:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:32.297 07:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xFqsverYme 00:22:32.297 07:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:32.297 07:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:22:32.297 07:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:32.297 07:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:32.297 07:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:32.298 ************************************ 00:22:32.298 END TEST raid_write_error_test 00:22:32.298 ************************************ 00:22:32.298 07:43:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:22:32.298 00:22:32.298 real 0m5.111s 00:22:32.298 user 0m5.994s 00:22:32.298 sys 0m0.676s 00:22:32.298 07:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:32.298 07:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.298 07:43:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:32.298 07:43:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:22:32.298 07:43:31 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:22:32.298 07:43:31 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:32.298 07:43:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.298 ************************************ 00:22:32.298 START TEST raid_state_function_test 00:22:32.298 ************************************ 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test concat 4 false 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.298 07:43:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:32.298 07:43:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71430 00:22:32.298 Process raid pid: 71430 00:22:32.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71430' 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71430 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 71430 ']' 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:32.298 07:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.298 [2024-10-07 07:43:31.578112] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:32.298 [2024-10-07 07:43:31.578324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.298 [2024-10-07 07:43:31.768517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.557 [2024-10-07 07:43:32.049279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.815 [2024-10-07 07:43:32.307083] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:32.815 [2024-10-07 07:43:32.307345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.383 [2024-10-07 07:43:32.666537] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:33.383 [2024-10-07 07:43:32.666778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:33.383 [2024-10-07 07:43:32.666897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:33.383 [2024-10-07 07:43:32.666961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:33.383 [2024-10-07 07:43:32.667052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:22:33.383 [2024-10-07 07:43:32.667106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:33.383 [2024-10-07 07:43:32.667205] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:33.383 [2024-10-07 07:43:32.667260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.383 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:33.384 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.384 "name": "Existed_Raid", 00:22:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.384 "strip_size_kb": 64, 00:22:33.384 "state": "configuring", 00:22:33.384 "raid_level": "concat", 00:22:33.384 "superblock": false, 00:22:33.384 "num_base_bdevs": 4, 00:22:33.384 "num_base_bdevs_discovered": 0, 00:22:33.384 "num_base_bdevs_operational": 4, 00:22:33.384 "base_bdevs_list": [ 00:22:33.384 { 00:22:33.384 "name": "BaseBdev1", 00:22:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.384 "is_configured": false, 00:22:33.384 "data_offset": 0, 00:22:33.384 "data_size": 0 00:22:33.384 }, 00:22:33.384 { 00:22:33.384 "name": "BaseBdev2", 00:22:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.384 "is_configured": false, 00:22:33.384 "data_offset": 0, 00:22:33.384 "data_size": 0 00:22:33.384 }, 00:22:33.384 { 00:22:33.384 "name": "BaseBdev3", 00:22:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.384 "is_configured": false, 00:22:33.384 "data_offset": 0, 00:22:33.384 "data_size": 0 00:22:33.384 }, 00:22:33.384 { 00:22:33.384 "name": "BaseBdev4", 00:22:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.384 "is_configured": false, 00:22:33.384 "data_offset": 0, 00:22:33.384 "data_size": 0 00:22:33.384 } 00:22:33.384 ] 00:22:33.384 }' 00:22:33.384 07:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.384 07:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.643 [2024-10-07 07:43:33.130554] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:33.643 [2024-10-07 07:43:33.130605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.643 [2024-10-07 07:43:33.138576] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:33.643 [2024-10-07 07:43:33.138774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:33.643 [2024-10-07 07:43:33.138895] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:33.643 [2024-10-07 07:43:33.138925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:33.643 [2024-10-07 07:43:33.138936] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:33.643 [2024-10-07 07:43:33.138951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:33.643 [2024-10-07 07:43:33.138961] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:33.643 [2024-10-07 07:43:33.138976] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.643 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.643 [2024-10-07 07:43:33.201280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:33.902 BaseBdev1 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.902 [ 00:22:33.902 { 00:22:33.902 "name": "BaseBdev1", 00:22:33.902 "aliases": [ 00:22:33.902 "c3b7c51a-e717-48ce-b53b-99c70613cbed" 00:22:33.902 ], 00:22:33.902 "product_name": "Malloc disk", 00:22:33.902 "block_size": 512, 00:22:33.902 "num_blocks": 65536, 00:22:33.902 "uuid": "c3b7c51a-e717-48ce-b53b-99c70613cbed", 00:22:33.902 "assigned_rate_limits": { 00:22:33.902 "rw_ios_per_sec": 0, 00:22:33.902 "rw_mbytes_per_sec": 0, 00:22:33.902 "r_mbytes_per_sec": 0, 00:22:33.902 "w_mbytes_per_sec": 0 00:22:33.902 }, 00:22:33.902 "claimed": true, 00:22:33.902 "claim_type": "exclusive_write", 00:22:33.902 "zoned": false, 00:22:33.902 "supported_io_types": { 00:22:33.902 "read": true, 00:22:33.902 "write": true, 00:22:33.902 "unmap": true, 00:22:33.902 "flush": true, 00:22:33.902 "reset": true, 00:22:33.902 "nvme_admin": false, 00:22:33.902 "nvme_io": false, 00:22:33.902 "nvme_io_md": false, 00:22:33.902 "write_zeroes": true, 00:22:33.902 "zcopy": true, 00:22:33.902 "get_zone_info": false, 00:22:33.902 "zone_management": false, 00:22:33.902 "zone_append": false, 00:22:33.902 "compare": false, 00:22:33.902 "compare_and_write": false, 00:22:33.902 "abort": true, 00:22:33.902 "seek_hole": false, 00:22:33.902 "seek_data": false, 00:22:33.902 "copy": true, 00:22:33.902 "nvme_iov_md": false 00:22:33.902 }, 00:22:33.902 "memory_domains": [ 00:22:33.902 { 00:22:33.902 "dma_device_id": "system", 00:22:33.902 "dma_device_type": 1 00:22:33.902 }, 00:22:33.902 { 00:22:33.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.902 "dma_device_type": 2 00:22:33.902 } 00:22:33.902 ], 00:22:33.902 "driver_specific": {} 00:22:33.902 } 00:22:33.902 ] 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 
-- # [[ 0 == 0 ]] 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:33.902 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.902 "name": "Existed_Raid", 
00:22:33.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.902 "strip_size_kb": 64, 00:22:33.902 "state": "configuring", 00:22:33.902 "raid_level": "concat", 00:22:33.902 "superblock": false, 00:22:33.902 "num_base_bdevs": 4, 00:22:33.902 "num_base_bdevs_discovered": 1, 00:22:33.902 "num_base_bdevs_operational": 4, 00:22:33.902 "base_bdevs_list": [ 00:22:33.902 { 00:22:33.902 "name": "BaseBdev1", 00:22:33.902 "uuid": "c3b7c51a-e717-48ce-b53b-99c70613cbed", 00:22:33.902 "is_configured": true, 00:22:33.902 "data_offset": 0, 00:22:33.902 "data_size": 65536 00:22:33.902 }, 00:22:33.902 { 00:22:33.902 "name": "BaseBdev2", 00:22:33.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.902 "is_configured": false, 00:22:33.902 "data_offset": 0, 00:22:33.902 "data_size": 0 00:22:33.902 }, 00:22:33.902 { 00:22:33.902 "name": "BaseBdev3", 00:22:33.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.902 "is_configured": false, 00:22:33.902 "data_offset": 0, 00:22:33.903 "data_size": 0 00:22:33.903 }, 00:22:33.903 { 00:22:33.903 "name": "BaseBdev4", 00:22:33.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.903 "is_configured": false, 00:22:33.903 "data_offset": 0, 00:22:33.903 "data_size": 0 00:22:33.903 } 00:22:33.903 ] 00:22:33.903 }' 00:22:33.903 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.903 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.162 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:34.162 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:34.162 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.162 [2024-10-07 07:43:33.709468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:34.162 [2024-10-07 07:43:33.709529] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:34.162 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:34.162 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:34.162 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:34.162 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.162 [2024-10-07 07:43:33.717513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:34.162 [2024-10-07 07:43:33.720025] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:34.162 [2024-10-07 07:43:33.720079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:34.162 [2024-10-07 07:43:33.720095] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:34.162 [2024-10-07 07:43:33.720117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:34.162 [2024-10-07 07:43:33.720130] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:34.162 [2024-10-07 07:43:33.720148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.422 "name": "Existed_Raid", 00:22:34.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.422 "strip_size_kb": 64, 00:22:34.422 "state": "configuring", 00:22:34.422 "raid_level": "concat", 00:22:34.422 "superblock": false, 00:22:34.422 "num_base_bdevs": 4, 00:22:34.422 
"num_base_bdevs_discovered": 1, 00:22:34.422 "num_base_bdevs_operational": 4, 00:22:34.422 "base_bdevs_list": [ 00:22:34.422 { 00:22:34.422 "name": "BaseBdev1", 00:22:34.422 "uuid": "c3b7c51a-e717-48ce-b53b-99c70613cbed", 00:22:34.422 "is_configured": true, 00:22:34.422 "data_offset": 0, 00:22:34.422 "data_size": 65536 00:22:34.422 }, 00:22:34.422 { 00:22:34.422 "name": "BaseBdev2", 00:22:34.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.422 "is_configured": false, 00:22:34.422 "data_offset": 0, 00:22:34.422 "data_size": 0 00:22:34.422 }, 00:22:34.422 { 00:22:34.422 "name": "BaseBdev3", 00:22:34.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.422 "is_configured": false, 00:22:34.422 "data_offset": 0, 00:22:34.422 "data_size": 0 00:22:34.422 }, 00:22:34.422 { 00:22:34.422 "name": "BaseBdev4", 00:22:34.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.422 "is_configured": false, 00:22:34.422 "data_offset": 0, 00:22:34.422 "data_size": 0 00:22:34.422 } 00:22:34.422 ] 00:22:34.422 }' 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.422 07:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.681 [2024-10-07 07:43:34.172777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.681 BaseBdev2 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:34.681 07:43:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:34.681 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.681 [ 00:22:34.681 { 00:22:34.681 "name": "BaseBdev2", 00:22:34.681 "aliases": [ 00:22:34.681 "6c766331-4a9d-47ae-8242-7df284f3ed8b" 00:22:34.681 ], 00:22:34.681 "product_name": "Malloc disk", 00:22:34.681 "block_size": 512, 00:22:34.681 "num_blocks": 65536, 00:22:34.681 "uuid": "6c766331-4a9d-47ae-8242-7df284f3ed8b", 00:22:34.681 "assigned_rate_limits": { 00:22:34.681 "rw_ios_per_sec": 0, 00:22:34.681 "rw_mbytes_per_sec": 0, 00:22:34.681 "r_mbytes_per_sec": 0, 00:22:34.681 "w_mbytes_per_sec": 0 00:22:34.681 }, 00:22:34.681 "claimed": true, 00:22:34.681 "claim_type": "exclusive_write", 00:22:34.681 "zoned": false, 00:22:34.681 "supported_io_types": { 
00:22:34.681 "read": true, 00:22:34.681 "write": true, 00:22:34.681 "unmap": true, 00:22:34.681 "flush": true, 00:22:34.681 "reset": true, 00:22:34.681 "nvme_admin": false, 00:22:34.681 "nvme_io": false, 00:22:34.681 "nvme_io_md": false, 00:22:34.681 "write_zeroes": true, 00:22:34.681 "zcopy": true, 00:22:34.681 "get_zone_info": false, 00:22:34.681 "zone_management": false, 00:22:34.681 "zone_append": false, 00:22:34.681 "compare": false, 00:22:34.682 "compare_and_write": false, 00:22:34.682 "abort": true, 00:22:34.682 "seek_hole": false, 00:22:34.682 "seek_data": false, 00:22:34.682 "copy": true, 00:22:34.682 "nvme_iov_md": false 00:22:34.682 }, 00:22:34.682 "memory_domains": [ 00:22:34.682 { 00:22:34.682 "dma_device_id": "system", 00:22:34.682 "dma_device_type": 1 00:22:34.682 }, 00:22:34.682 { 00:22:34.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.682 "dma_device_type": 2 00:22:34.682 } 00:22:34.682 ], 00:22:34.682 "driver_specific": {} 00:22:34.682 } 00:22:34.682 ] 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.682 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.941 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:34.941 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.941 "name": "Existed_Raid", 00:22:34.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.941 "strip_size_kb": 64, 00:22:34.941 "state": "configuring", 00:22:34.941 "raid_level": "concat", 00:22:34.941 "superblock": false, 00:22:34.941 "num_base_bdevs": 4, 00:22:34.941 "num_base_bdevs_discovered": 2, 00:22:34.941 "num_base_bdevs_operational": 4, 00:22:34.941 "base_bdevs_list": [ 00:22:34.941 { 00:22:34.941 "name": "BaseBdev1", 00:22:34.941 "uuid": "c3b7c51a-e717-48ce-b53b-99c70613cbed", 00:22:34.941 "is_configured": true, 00:22:34.941 "data_offset": 0, 00:22:34.941 "data_size": 65536 00:22:34.941 }, 00:22:34.941 { 00:22:34.941 "name": "BaseBdev2", 00:22:34.941 "uuid": "6c766331-4a9d-47ae-8242-7df284f3ed8b", 00:22:34.941 
"is_configured": true, 00:22:34.941 "data_offset": 0, 00:22:34.941 "data_size": 65536 00:22:34.941 }, 00:22:34.941 { 00:22:34.941 "name": "BaseBdev3", 00:22:34.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.941 "is_configured": false, 00:22:34.941 "data_offset": 0, 00:22:34.941 "data_size": 0 00:22:34.941 }, 00:22:34.941 { 00:22:34.941 "name": "BaseBdev4", 00:22:34.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.941 "is_configured": false, 00:22:34.941 "data_offset": 0, 00:22:34.941 "data_size": 0 00:22:34.941 } 00:22:34.941 ] 00:22:34.941 }' 00:22:34.941 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.941 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.201 [2024-10-07 07:43:34.659201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:35.201 BaseBdev3 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.201 [ 00:22:35.201 { 00:22:35.201 "name": "BaseBdev3", 00:22:35.201 "aliases": [ 00:22:35.201 "960a582f-6903-46e7-9080-84decdf1d572" 00:22:35.201 ], 00:22:35.201 "product_name": "Malloc disk", 00:22:35.201 "block_size": 512, 00:22:35.201 "num_blocks": 65536, 00:22:35.201 "uuid": "960a582f-6903-46e7-9080-84decdf1d572", 00:22:35.201 "assigned_rate_limits": { 00:22:35.201 "rw_ios_per_sec": 0, 00:22:35.201 "rw_mbytes_per_sec": 0, 00:22:35.201 "r_mbytes_per_sec": 0, 00:22:35.201 "w_mbytes_per_sec": 0 00:22:35.201 }, 00:22:35.201 "claimed": true, 00:22:35.201 "claim_type": "exclusive_write", 00:22:35.201 "zoned": false, 00:22:35.201 "supported_io_types": { 00:22:35.201 "read": true, 00:22:35.201 "write": true, 00:22:35.201 "unmap": true, 00:22:35.201 "flush": true, 00:22:35.201 "reset": true, 00:22:35.201 "nvme_admin": false, 00:22:35.201 "nvme_io": false, 00:22:35.201 "nvme_io_md": false, 00:22:35.201 "write_zeroes": true, 00:22:35.201 "zcopy": true, 00:22:35.201 "get_zone_info": false, 00:22:35.201 "zone_management": false, 00:22:35.201 "zone_append": false, 00:22:35.201 "compare": false, 00:22:35.201 "compare_and_write": false, 
00:22:35.201 "abort": true, 00:22:35.201 "seek_hole": false, 00:22:35.201 "seek_data": false, 00:22:35.201 "copy": true, 00:22:35.201 "nvme_iov_md": false 00:22:35.201 }, 00:22:35.201 "memory_domains": [ 00:22:35.201 { 00:22:35.201 "dma_device_id": "system", 00:22:35.201 "dma_device_type": 1 00:22:35.201 }, 00:22:35.201 { 00:22:35.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.201 "dma_device_type": 2 00:22:35.201 } 00:22:35.201 ], 00:22:35.201 "driver_specific": {} 00:22:35.201 } 00:22:35.201 ] 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.201 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.201 "name": "Existed_Raid", 00:22:35.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.201 "strip_size_kb": 64, 00:22:35.201 "state": "configuring", 00:22:35.201 "raid_level": "concat", 00:22:35.201 "superblock": false, 00:22:35.201 "num_base_bdevs": 4, 00:22:35.201 "num_base_bdevs_discovered": 3, 00:22:35.201 "num_base_bdevs_operational": 4, 00:22:35.201 "base_bdevs_list": [ 00:22:35.201 { 00:22:35.201 "name": "BaseBdev1", 00:22:35.201 "uuid": "c3b7c51a-e717-48ce-b53b-99c70613cbed", 00:22:35.201 "is_configured": true, 00:22:35.201 "data_offset": 0, 00:22:35.201 "data_size": 65536 00:22:35.201 }, 00:22:35.201 { 00:22:35.201 "name": "BaseBdev2", 00:22:35.201 "uuid": "6c766331-4a9d-47ae-8242-7df284f3ed8b", 00:22:35.201 "is_configured": true, 00:22:35.201 "data_offset": 0, 00:22:35.201 "data_size": 65536 00:22:35.201 }, 00:22:35.201 { 00:22:35.201 "name": "BaseBdev3", 00:22:35.201 "uuid": "960a582f-6903-46e7-9080-84decdf1d572", 00:22:35.202 "is_configured": true, 00:22:35.202 "data_offset": 0, 00:22:35.202 "data_size": 65536 00:22:35.202 }, 00:22:35.202 { 00:22:35.202 "name": "BaseBdev4", 00:22:35.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.202 "is_configured": false, 
00:22:35.202 "data_offset": 0, 00:22:35.202 "data_size": 0 00:22:35.202 } 00:22:35.202 ] 00:22:35.202 }' 00:22:35.202 07:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.202 07:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.769 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:35.769 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.770 [2024-10-07 07:43:35.189947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:35.770 [2024-10-07 07:43:35.190009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:35.770 [2024-10-07 07:43:35.190021] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:35.770 [2024-10-07 07:43:35.190356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:35.770 [2024-10-07 07:43:35.190541] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:35.770 [2024-10-07 07:43:35.190559] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:35.770 BaseBdev4 00:22:35.770 [2024-10-07 07:43:35.190901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.770 [ 00:22:35.770 { 00:22:35.770 "name": "BaseBdev4", 00:22:35.770 "aliases": [ 00:22:35.770 "ae3d577c-bacd-4b4d-b448-263a6883c0df" 00:22:35.770 ], 00:22:35.770 "product_name": "Malloc disk", 00:22:35.770 "block_size": 512, 00:22:35.770 "num_blocks": 65536, 00:22:35.770 "uuid": "ae3d577c-bacd-4b4d-b448-263a6883c0df", 00:22:35.770 "assigned_rate_limits": { 00:22:35.770 "rw_ios_per_sec": 0, 00:22:35.770 "rw_mbytes_per_sec": 0, 00:22:35.770 "r_mbytes_per_sec": 0, 00:22:35.770 "w_mbytes_per_sec": 0 00:22:35.770 }, 00:22:35.770 "claimed": true, 00:22:35.770 "claim_type": "exclusive_write", 00:22:35.770 "zoned": false, 00:22:35.770 "supported_io_types": { 00:22:35.770 "read": true, 00:22:35.770 "write": true, 00:22:35.770 "unmap": true, 00:22:35.770 "flush": true, 00:22:35.770 "reset": true, 00:22:35.770 
"nvme_admin": false, 00:22:35.770 "nvme_io": false, 00:22:35.770 "nvme_io_md": false, 00:22:35.770 "write_zeroes": true, 00:22:35.770 "zcopy": true, 00:22:35.770 "get_zone_info": false, 00:22:35.770 "zone_management": false, 00:22:35.770 "zone_append": false, 00:22:35.770 "compare": false, 00:22:35.770 "compare_and_write": false, 00:22:35.770 "abort": true, 00:22:35.770 "seek_hole": false, 00:22:35.770 "seek_data": false, 00:22:35.770 "copy": true, 00:22:35.770 "nvme_iov_md": false 00:22:35.770 }, 00:22:35.770 "memory_domains": [ 00:22:35.770 { 00:22:35.770 "dma_device_id": "system", 00:22:35.770 "dma_device_type": 1 00:22:35.770 }, 00:22:35.770 { 00:22:35.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.770 "dma_device_type": 2 00:22:35.770 } 00:22:35.770 ], 00:22:35.770 "driver_specific": {} 00:22:35.770 } 00:22:35.770 ] 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:35.770 
07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.770 "name": "Existed_Raid", 00:22:35.770 "uuid": "28646ce9-3f80-4133-bb87-a89e3f31f84f", 00:22:35.770 "strip_size_kb": 64, 00:22:35.770 "state": "online", 00:22:35.770 "raid_level": "concat", 00:22:35.770 "superblock": false, 00:22:35.770 "num_base_bdevs": 4, 00:22:35.770 "num_base_bdevs_discovered": 4, 00:22:35.770 "num_base_bdevs_operational": 4, 00:22:35.770 "base_bdevs_list": [ 00:22:35.770 { 00:22:35.770 "name": "BaseBdev1", 00:22:35.770 "uuid": "c3b7c51a-e717-48ce-b53b-99c70613cbed", 00:22:35.770 "is_configured": true, 00:22:35.770 "data_offset": 0, 00:22:35.770 "data_size": 65536 00:22:35.770 }, 00:22:35.770 { 00:22:35.770 "name": "BaseBdev2", 00:22:35.770 "uuid": "6c766331-4a9d-47ae-8242-7df284f3ed8b", 00:22:35.770 "is_configured": true, 00:22:35.770 "data_offset": 0, 00:22:35.770 "data_size": 65536 00:22:35.770 }, 00:22:35.770 { 00:22:35.770 "name": "BaseBdev3", 
00:22:35.770 "uuid": "960a582f-6903-46e7-9080-84decdf1d572", 00:22:35.770 "is_configured": true, 00:22:35.770 "data_offset": 0, 00:22:35.770 "data_size": 65536 00:22:35.770 }, 00:22:35.770 { 00:22:35.770 "name": "BaseBdev4", 00:22:35.770 "uuid": "ae3d577c-bacd-4b4d-b448-263a6883c0df", 00:22:35.770 "is_configured": true, 00:22:35.770 "data_offset": 0, 00:22:35.770 "data_size": 65536 00:22:35.770 } 00:22:35.770 ] 00:22:35.770 }' 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.770 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:36.338 [2024-10-07 07:43:35.686507] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.338 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:36.338 
07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:36.338 "name": "Existed_Raid", 00:22:36.338 "aliases": [ 00:22:36.338 "28646ce9-3f80-4133-bb87-a89e3f31f84f" 00:22:36.338 ], 00:22:36.338 "product_name": "Raid Volume", 00:22:36.338 "block_size": 512, 00:22:36.338 "num_blocks": 262144, 00:22:36.338 "uuid": "28646ce9-3f80-4133-bb87-a89e3f31f84f", 00:22:36.338 "assigned_rate_limits": { 00:22:36.338 "rw_ios_per_sec": 0, 00:22:36.338 "rw_mbytes_per_sec": 0, 00:22:36.338 "r_mbytes_per_sec": 0, 00:22:36.338 "w_mbytes_per_sec": 0 00:22:36.338 }, 00:22:36.338 "claimed": false, 00:22:36.338 "zoned": false, 00:22:36.338 "supported_io_types": { 00:22:36.338 "read": true, 00:22:36.338 "write": true, 00:22:36.338 "unmap": true, 00:22:36.338 "flush": true, 00:22:36.338 "reset": true, 00:22:36.338 "nvme_admin": false, 00:22:36.338 "nvme_io": false, 00:22:36.338 "nvme_io_md": false, 00:22:36.338 "write_zeroes": true, 00:22:36.338 "zcopy": false, 00:22:36.338 "get_zone_info": false, 00:22:36.338 "zone_management": false, 00:22:36.338 "zone_append": false, 00:22:36.338 "compare": false, 00:22:36.338 "compare_and_write": false, 00:22:36.338 "abort": false, 00:22:36.338 "seek_hole": false, 00:22:36.338 "seek_data": false, 00:22:36.338 "copy": false, 00:22:36.338 "nvme_iov_md": false 00:22:36.338 }, 00:22:36.338 "memory_domains": [ 00:22:36.338 { 00:22:36.338 "dma_device_id": "system", 00:22:36.338 "dma_device_type": 1 00:22:36.338 }, 00:22:36.338 { 00:22:36.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.338 "dma_device_type": 2 00:22:36.338 }, 00:22:36.338 { 00:22:36.338 "dma_device_id": "system", 00:22:36.338 "dma_device_type": 1 00:22:36.338 }, 00:22:36.338 { 00:22:36.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.338 "dma_device_type": 2 00:22:36.338 }, 00:22:36.338 { 00:22:36.338 "dma_device_id": "system", 00:22:36.338 "dma_device_type": 1 00:22:36.338 }, 00:22:36.338 { 00:22:36.338 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:36.338 "dma_device_type": 2 00:22:36.338 }, 00:22:36.338 { 00:22:36.338 "dma_device_id": "system", 00:22:36.338 "dma_device_type": 1 00:22:36.338 }, 00:22:36.338 { 00:22:36.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.338 "dma_device_type": 2 00:22:36.338 } 00:22:36.338 ], 00:22:36.338 "driver_specific": { 00:22:36.338 "raid": { 00:22:36.338 "uuid": "28646ce9-3f80-4133-bb87-a89e3f31f84f", 00:22:36.338 "strip_size_kb": 64, 00:22:36.338 "state": "online", 00:22:36.338 "raid_level": "concat", 00:22:36.338 "superblock": false, 00:22:36.338 "num_base_bdevs": 4, 00:22:36.338 "num_base_bdevs_discovered": 4, 00:22:36.338 "num_base_bdevs_operational": 4, 00:22:36.338 "base_bdevs_list": [ 00:22:36.338 { 00:22:36.338 "name": "BaseBdev1", 00:22:36.339 "uuid": "c3b7c51a-e717-48ce-b53b-99c70613cbed", 00:22:36.339 "is_configured": true, 00:22:36.339 "data_offset": 0, 00:22:36.339 "data_size": 65536 00:22:36.339 }, 00:22:36.339 { 00:22:36.339 "name": "BaseBdev2", 00:22:36.339 "uuid": "6c766331-4a9d-47ae-8242-7df284f3ed8b", 00:22:36.339 "is_configured": true, 00:22:36.339 "data_offset": 0, 00:22:36.339 "data_size": 65536 00:22:36.339 }, 00:22:36.339 { 00:22:36.339 "name": "BaseBdev3", 00:22:36.339 "uuid": "960a582f-6903-46e7-9080-84decdf1d572", 00:22:36.339 "is_configured": true, 00:22:36.339 "data_offset": 0, 00:22:36.339 "data_size": 65536 00:22:36.339 }, 00:22:36.339 { 00:22:36.339 "name": "BaseBdev4", 00:22:36.339 "uuid": "ae3d577c-bacd-4b4d-b448-263a6883c0df", 00:22:36.339 "is_configured": true, 00:22:36.339 "data_offset": 0, 00:22:36.339 "data_size": 65536 00:22:36.339 } 00:22:36.339 ] 00:22:36.339 } 00:22:36.339 } 00:22:36.339 }' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:36.339 BaseBdev2 
00:22:36.339 BaseBdev3 00:22:36.339 BaseBdev4' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:36.339 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:36.598 07:43:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:36.598 07:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.598 07:43:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.598 [2024-10-07 07:43:36.022262] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:36.598 [2024-10-07 07:43:36.022423] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.598 [2024-10-07 07:43:36.022575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:36.598 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.857 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:36.857 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.857 "name": "Existed_Raid", 00:22:36.857 "uuid": "28646ce9-3f80-4133-bb87-a89e3f31f84f", 00:22:36.857 "strip_size_kb": 64, 00:22:36.857 "state": "offline", 00:22:36.857 "raid_level": "concat", 00:22:36.857 "superblock": false, 00:22:36.857 "num_base_bdevs": 4, 00:22:36.857 "num_base_bdevs_discovered": 3, 00:22:36.857 "num_base_bdevs_operational": 3, 00:22:36.857 "base_bdevs_list": [ 00:22:36.857 { 00:22:36.857 "name": null, 00:22:36.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.857 "is_configured": false, 00:22:36.857 "data_offset": 0, 00:22:36.857 "data_size": 65536 00:22:36.857 }, 00:22:36.857 { 00:22:36.857 "name": "BaseBdev2", 00:22:36.857 "uuid": "6c766331-4a9d-47ae-8242-7df284f3ed8b", 00:22:36.857 "is_configured": 
true, 00:22:36.857 "data_offset": 0, 00:22:36.857 "data_size": 65536 00:22:36.857 }, 00:22:36.857 { 00:22:36.857 "name": "BaseBdev3", 00:22:36.857 "uuid": "960a582f-6903-46e7-9080-84decdf1d572", 00:22:36.857 "is_configured": true, 00:22:36.857 "data_offset": 0, 00:22:36.857 "data_size": 65536 00:22:36.857 }, 00:22:36.857 { 00:22:36.857 "name": "BaseBdev4", 00:22:36.857 "uuid": "ae3d577c-bacd-4b4d-b448-263a6883c0df", 00:22:36.857 "is_configured": true, 00:22:36.857 "data_offset": 0, 00:22:36.857 "data_size": 65536 00:22:36.857 } 00:22:36.857 ] 00:22:36.857 }' 00:22:36.857 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.857 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 
00:22:37.116 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.116 [2024-10-07 07:43:36.615301] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.374 [2024-10-07 07:43:36.778355] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:37.374 07:43:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.374 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.633 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:37.633 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:37.633 07:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:37.633 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.633 07:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.633 [2024-10-07 07:43:36.972296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:37.633 [2024-10-07 07:43:36.972477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.633 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.893 BaseBdev2 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # 
bdev_timeout=2000 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.893 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.893 [ 00:22:37.893 { 00:22:37.893 "name": "BaseBdev2", 00:22:37.893 "aliases": [ 00:22:37.893 "05060a63-37d8-4a15-a570-e8d76d8eddab" 00:22:37.893 ], 00:22:37.893 "product_name": "Malloc disk", 00:22:37.893 "block_size": 512, 00:22:37.893 "num_blocks": 65536, 00:22:37.893 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:37.893 "assigned_rate_limits": { 00:22:37.893 "rw_ios_per_sec": 0, 00:22:37.893 "rw_mbytes_per_sec": 0, 00:22:37.893 "r_mbytes_per_sec": 0, 00:22:37.893 "w_mbytes_per_sec": 0 00:22:37.893 }, 00:22:37.893 "claimed": false, 00:22:37.894 "zoned": false, 00:22:37.894 "supported_io_types": { 00:22:37.894 "read": true, 00:22:37.894 "write": true, 00:22:37.894 "unmap": true, 00:22:37.894 "flush": true, 00:22:37.894 "reset": true, 00:22:37.894 "nvme_admin": false, 00:22:37.894 "nvme_io": false, 00:22:37.894 "nvme_io_md": false, 00:22:37.894 "write_zeroes": true, 00:22:37.894 "zcopy": true, 00:22:37.894 "get_zone_info": false, 00:22:37.894 "zone_management": false, 00:22:37.894 "zone_append": false, 00:22:37.894 "compare": false, 00:22:37.894 "compare_and_write": false, 00:22:37.894 "abort": true, 00:22:37.894 "seek_hole": false, 00:22:37.894 
"seek_data": false, 00:22:37.894 "copy": true, 00:22:37.894 "nvme_iov_md": false 00:22:37.894 }, 00:22:37.894 "memory_domains": [ 00:22:37.894 { 00:22:37.894 "dma_device_id": "system", 00:22:37.894 "dma_device_type": 1 00:22:37.894 }, 00:22:37.894 { 00:22:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.894 "dma_device_type": 2 00:22:37.894 } 00:22:37.894 ], 00:22:37.894 "driver_specific": {} 00:22:37.894 } 00:22:37.894 ] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.894 BaseBdev3 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 
00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.894 [ 00:22:37.894 { 00:22:37.894 "name": "BaseBdev3", 00:22:37.894 "aliases": [ 00:22:37.894 "640309b0-cf9e-4d1b-9339-ab36593cf5d6" 00:22:37.894 ], 00:22:37.894 "product_name": "Malloc disk", 00:22:37.894 "block_size": 512, 00:22:37.894 "num_blocks": 65536, 00:22:37.894 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:37.894 "assigned_rate_limits": { 00:22:37.894 "rw_ios_per_sec": 0, 00:22:37.894 "rw_mbytes_per_sec": 0, 00:22:37.894 "r_mbytes_per_sec": 0, 00:22:37.894 "w_mbytes_per_sec": 0 00:22:37.894 }, 00:22:37.894 "claimed": false, 00:22:37.894 "zoned": false, 00:22:37.894 "supported_io_types": { 00:22:37.894 "read": true, 00:22:37.894 "write": true, 00:22:37.894 "unmap": true, 00:22:37.894 "flush": true, 00:22:37.894 "reset": true, 00:22:37.894 "nvme_admin": false, 00:22:37.894 "nvme_io": false, 00:22:37.894 "nvme_io_md": false, 00:22:37.894 "write_zeroes": true, 00:22:37.894 "zcopy": true, 00:22:37.894 "get_zone_info": false, 00:22:37.894 "zone_management": false, 00:22:37.894 "zone_append": false, 00:22:37.894 "compare": false, 00:22:37.894 "compare_and_write": false, 00:22:37.894 "abort": true, 00:22:37.894 "seek_hole": false, 00:22:37.894 "seek_data": false, 
00:22:37.894 "copy": true, 00:22:37.894 "nvme_iov_md": false 00:22:37.894 }, 00:22:37.894 "memory_domains": [ 00:22:37.894 { 00:22:37.894 "dma_device_id": "system", 00:22:37.894 "dma_device_type": 1 00:22:37.894 }, 00:22:37.894 { 00:22:37.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.894 "dma_device_type": 2 00:22:37.894 } 00:22:37.894 ], 00:22:37.894 "driver_specific": {} 00:22:37.894 } 00:22:37.894 ] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.894 BaseBdev4 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:37.894 
07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.894 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.894 [ 00:22:37.894 { 00:22:37.894 "name": "BaseBdev4", 00:22:37.894 "aliases": [ 00:22:37.894 "68104e52-573d-41cb-958a-0dbbc12b9ee3" 00:22:37.894 ], 00:22:37.894 "product_name": "Malloc disk", 00:22:37.894 "block_size": 512, 00:22:37.894 "num_blocks": 65536, 00:22:37.894 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:37.894 "assigned_rate_limits": { 00:22:37.894 "rw_ios_per_sec": 0, 00:22:37.895 "rw_mbytes_per_sec": 0, 00:22:37.895 "r_mbytes_per_sec": 0, 00:22:37.895 "w_mbytes_per_sec": 0 00:22:37.895 }, 00:22:37.895 "claimed": false, 00:22:37.895 "zoned": false, 00:22:37.895 "supported_io_types": { 00:22:37.895 "read": true, 00:22:37.895 "write": true, 00:22:37.895 "unmap": true, 00:22:37.895 "flush": true, 00:22:37.895 "reset": true, 00:22:37.895 "nvme_admin": false, 00:22:37.895 "nvme_io": false, 00:22:37.895 "nvme_io_md": false, 00:22:37.895 "write_zeroes": true, 00:22:37.895 "zcopy": true, 00:22:37.895 "get_zone_info": false, 00:22:37.895 "zone_management": false, 00:22:37.895 "zone_append": false, 00:22:37.895 "compare": false, 00:22:37.895 "compare_and_write": false, 00:22:37.895 "abort": true, 00:22:37.895 "seek_hole": false, 00:22:37.895 "seek_data": false, 00:22:37.895 
"copy": true, 00:22:37.895 "nvme_iov_md": false 00:22:37.895 }, 00:22:37.895 "memory_domains": [ 00:22:37.895 { 00:22:37.895 "dma_device_id": "system", 00:22:37.895 "dma_device_type": 1 00:22:37.895 }, 00:22:37.895 { 00:22:37.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.895 "dma_device_type": 2 00:22:37.895 } 00:22:37.895 ], 00:22:37.895 "driver_specific": {} 00:22:37.895 } 00:22:37.895 ] 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.895 [2024-10-07 07:43:37.415476] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:37.895 [2024-10-07 07:43:37.415652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:37.895 [2024-10-07 07:43:37.415801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:37.895 [2024-10-07 07:43:37.418326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:37.895 [2024-10-07 07:43:37.418520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:37.895 07:43:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.895 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:38.154 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.154 "name": "Existed_Raid", 00:22:38.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.154 "strip_size_kb": 64, 00:22:38.154 "state": "configuring", 00:22:38.154 
"raid_level": "concat", 00:22:38.154 "superblock": false, 00:22:38.154 "num_base_bdevs": 4, 00:22:38.154 "num_base_bdevs_discovered": 3, 00:22:38.154 "num_base_bdevs_operational": 4, 00:22:38.154 "base_bdevs_list": [ 00:22:38.154 { 00:22:38.154 "name": "BaseBdev1", 00:22:38.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.154 "is_configured": false, 00:22:38.154 "data_offset": 0, 00:22:38.154 "data_size": 0 00:22:38.154 }, 00:22:38.154 { 00:22:38.154 "name": "BaseBdev2", 00:22:38.154 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:38.154 "is_configured": true, 00:22:38.154 "data_offset": 0, 00:22:38.154 "data_size": 65536 00:22:38.154 }, 00:22:38.154 { 00:22:38.154 "name": "BaseBdev3", 00:22:38.154 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:38.154 "is_configured": true, 00:22:38.154 "data_offset": 0, 00:22:38.154 "data_size": 65536 00:22:38.154 }, 00:22:38.154 { 00:22:38.154 "name": "BaseBdev4", 00:22:38.154 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:38.154 "is_configured": true, 00:22:38.154 "data_offset": 0, 00:22:38.154 "data_size": 65536 00:22:38.154 } 00:22:38.154 ] 00:22:38.154 }' 00:22:38.154 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.154 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 [2024-10-07 07:43:37.891620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.413 "name": "Existed_Raid", 00:22:38.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.413 "strip_size_kb": 64, 00:22:38.413 "state": "configuring", 00:22:38.413 "raid_level": "concat", 00:22:38.413 "superblock": false, 
00:22:38.413 "num_base_bdevs": 4, 00:22:38.413 "num_base_bdevs_discovered": 2, 00:22:38.413 "num_base_bdevs_operational": 4, 00:22:38.413 "base_bdevs_list": [ 00:22:38.413 { 00:22:38.413 "name": "BaseBdev1", 00:22:38.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.413 "is_configured": false, 00:22:38.413 "data_offset": 0, 00:22:38.413 "data_size": 0 00:22:38.413 }, 00:22:38.413 { 00:22:38.413 "name": null, 00:22:38.413 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:38.413 "is_configured": false, 00:22:38.413 "data_offset": 0, 00:22:38.413 "data_size": 65536 00:22:38.413 }, 00:22:38.413 { 00:22:38.413 "name": "BaseBdev3", 00:22:38.413 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:38.413 "is_configured": true, 00:22:38.413 "data_offset": 0, 00:22:38.413 "data_size": 65536 00:22:38.413 }, 00:22:38.413 { 00:22:38.413 "name": "BaseBdev4", 00:22:38.413 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:38.413 "is_configured": true, 00:22:38.413 "data_offset": 0, 00:22:38.413 "data_size": 65536 00:22:38.413 } 00:22:38.413 ] 00:22:38.413 }' 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.413 07:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:38.980 07:43:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.980 [2024-10-07 07:43:38.484506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.980 BaseBdev1 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.980 [ 00:22:38.980 { 00:22:38.980 "name": "BaseBdev1", 00:22:38.980 "aliases": [ 00:22:38.980 "4c246654-b341-439e-a11c-ab09bab0291d" 00:22:38.980 ], 00:22:38.980 "product_name": "Malloc disk", 00:22:38.980 "block_size": 512, 00:22:38.980 "num_blocks": 65536, 00:22:38.980 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:38.980 "assigned_rate_limits": { 00:22:38.980 "rw_ios_per_sec": 0, 00:22:38.980 "rw_mbytes_per_sec": 0, 00:22:38.980 "r_mbytes_per_sec": 0, 00:22:38.980 "w_mbytes_per_sec": 0 00:22:38.980 }, 00:22:38.980 "claimed": true, 00:22:38.980 "claim_type": "exclusive_write", 00:22:38.980 "zoned": false, 00:22:38.980 "supported_io_types": { 00:22:38.980 "read": true, 00:22:38.980 "write": true, 00:22:38.980 "unmap": true, 00:22:38.980 "flush": true, 00:22:38.980 "reset": true, 00:22:38.980 "nvme_admin": false, 00:22:38.980 "nvme_io": false, 00:22:38.980 "nvme_io_md": false, 00:22:38.980 "write_zeroes": true, 00:22:38.980 "zcopy": true, 00:22:38.980 "get_zone_info": false, 00:22:38.980 "zone_management": false, 00:22:38.980 "zone_append": false, 00:22:38.980 "compare": false, 00:22:38.980 "compare_and_write": false, 00:22:38.980 "abort": true, 00:22:38.980 "seek_hole": false, 00:22:38.980 "seek_data": false, 00:22:38.980 "copy": true, 00:22:38.980 "nvme_iov_md": false 00:22:38.980 }, 00:22:38.980 "memory_domains": [ 00:22:38.980 { 00:22:38.980 "dma_device_id": "system", 00:22:38.980 "dma_device_type": 1 00:22:38.980 }, 00:22:38.980 { 00:22:38.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.980 "dma_device_type": 2 00:22:38.980 } 00:22:38.980 ], 00:22:38.980 "driver_specific": {} 00:22:38.980 } 00:22:38.980 ] 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:38.980 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.239 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:39.239 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.239 "name": "Existed_Raid", 00:22:39.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.239 "strip_size_kb": 64, 00:22:39.239 "state": "configuring", 00:22:39.239 "raid_level": "concat", 00:22:39.239 "superblock": false, 
00:22:39.239 "num_base_bdevs": 4, 00:22:39.239 "num_base_bdevs_discovered": 3, 00:22:39.239 "num_base_bdevs_operational": 4, 00:22:39.239 "base_bdevs_list": [ 00:22:39.239 { 00:22:39.239 "name": "BaseBdev1", 00:22:39.239 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:39.239 "is_configured": true, 00:22:39.239 "data_offset": 0, 00:22:39.239 "data_size": 65536 00:22:39.239 }, 00:22:39.239 { 00:22:39.239 "name": null, 00:22:39.239 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:39.239 "is_configured": false, 00:22:39.239 "data_offset": 0, 00:22:39.239 "data_size": 65536 00:22:39.239 }, 00:22:39.239 { 00:22:39.239 "name": "BaseBdev3", 00:22:39.239 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:39.239 "is_configured": true, 00:22:39.239 "data_offset": 0, 00:22:39.239 "data_size": 65536 00:22:39.239 }, 00:22:39.239 { 00:22:39.239 "name": "BaseBdev4", 00:22:39.239 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:39.239 "is_configured": true, 00:22:39.239 "data_offset": 0, 00:22:39.239 "data_size": 65536 00:22:39.239 } 00:22:39.239 ] 00:22:39.239 }' 00:22:39.239 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.239 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.497 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:39.497 07:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.497 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:39.497 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.497 07:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:39.497 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:39.497 07:43:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:39.497 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:39.497 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.497 [2024-10-07 07:43:39.040762] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.498 07:43:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:39.498 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.756 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:39.756 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.756 "name": "Existed_Raid", 00:22:39.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.756 "strip_size_kb": 64, 00:22:39.756 "state": "configuring", 00:22:39.756 "raid_level": "concat", 00:22:39.756 "superblock": false, 00:22:39.756 "num_base_bdevs": 4, 00:22:39.756 "num_base_bdevs_discovered": 2, 00:22:39.756 "num_base_bdevs_operational": 4, 00:22:39.756 "base_bdevs_list": [ 00:22:39.756 { 00:22:39.756 "name": "BaseBdev1", 00:22:39.756 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:39.756 "is_configured": true, 00:22:39.756 "data_offset": 0, 00:22:39.756 "data_size": 65536 00:22:39.756 }, 00:22:39.756 { 00:22:39.756 "name": null, 00:22:39.756 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:39.756 "is_configured": false, 00:22:39.756 "data_offset": 0, 00:22:39.756 "data_size": 65536 00:22:39.756 }, 00:22:39.756 { 00:22:39.756 "name": null, 00:22:39.756 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:39.756 "is_configured": false, 00:22:39.756 "data_offset": 0, 00:22:39.756 "data_size": 65536 00:22:39.756 }, 00:22:39.756 { 00:22:39.756 "name": "BaseBdev4", 00:22:39.756 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:39.756 "is_configured": true, 00:22:39.756 "data_offset": 0, 00:22:39.756 "data_size": 65536 00:22:39.756 } 00:22:39.756 ] 00:22:39.756 }' 00:22:39.756 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.756 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.044 07:43:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.044 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:40.044 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:40.044 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.044 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.306 [2024-10-07 07:43:39.588932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:40.306 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.307 "name": "Existed_Raid", 00:22:40.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.307 "strip_size_kb": 64, 00:22:40.307 "state": "configuring", 00:22:40.307 "raid_level": "concat", 00:22:40.307 "superblock": false, 00:22:40.307 "num_base_bdevs": 4, 00:22:40.307 "num_base_bdevs_discovered": 3, 00:22:40.307 "num_base_bdevs_operational": 4, 00:22:40.307 "base_bdevs_list": [ 00:22:40.307 { 00:22:40.307 "name": "BaseBdev1", 00:22:40.307 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:40.307 "is_configured": true, 00:22:40.307 "data_offset": 0, 00:22:40.307 "data_size": 65536 00:22:40.307 }, 00:22:40.307 { 00:22:40.307 "name": null, 00:22:40.307 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:40.307 "is_configured": false, 00:22:40.307 "data_offset": 0, 00:22:40.307 "data_size": 65536 00:22:40.307 }, 00:22:40.307 { 00:22:40.307 "name": "BaseBdev3", 00:22:40.307 "uuid": 
"640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:40.307 "is_configured": true, 00:22:40.307 "data_offset": 0, 00:22:40.307 "data_size": 65536 00:22:40.307 }, 00:22:40.307 { 00:22:40.307 "name": "BaseBdev4", 00:22:40.307 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:40.307 "is_configured": true, 00:22:40.307 "data_offset": 0, 00:22:40.307 "data_size": 65536 00:22:40.307 } 00:22:40.307 ] 00:22:40.307 }' 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.307 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.566 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:40.566 07:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.566 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:40.566 07:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.566 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:40.566 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:40.566 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:40.566 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:40.566 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.566 [2024-10-07 07:43:40.037046] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:40.824 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.824 "name": "Existed_Raid", 00:22:40.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.824 "strip_size_kb": 64, 00:22:40.824 "state": "configuring", 00:22:40.824 "raid_level": "concat", 00:22:40.824 "superblock": false, 00:22:40.824 "num_base_bdevs": 4, 00:22:40.824 
"num_base_bdevs_discovered": 2, 00:22:40.824 "num_base_bdevs_operational": 4, 00:22:40.824 "base_bdevs_list": [ 00:22:40.824 { 00:22:40.824 "name": null, 00:22:40.824 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:40.824 "is_configured": false, 00:22:40.824 "data_offset": 0, 00:22:40.825 "data_size": 65536 00:22:40.825 }, 00:22:40.825 { 00:22:40.825 "name": null, 00:22:40.825 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:40.825 "is_configured": false, 00:22:40.825 "data_offset": 0, 00:22:40.825 "data_size": 65536 00:22:40.825 }, 00:22:40.825 { 00:22:40.825 "name": "BaseBdev3", 00:22:40.825 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:40.825 "is_configured": true, 00:22:40.825 "data_offset": 0, 00:22:40.825 "data_size": 65536 00:22:40.825 }, 00:22:40.825 { 00:22:40.825 "name": "BaseBdev4", 00:22:40.825 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:40.825 "is_configured": true, 00:22:40.825 "data_offset": 0, 00:22:40.825 "data_size": 65536 00:22:40.825 } 00:22:40.825 ] 00:22:40.825 }' 00:22:40.825 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.825 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.083 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:41.083 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.083 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.083 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.083 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.341 [2024-10-07 07:43:40.673128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.341 "name": "Existed_Raid", 00:22:41.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.341 "strip_size_kb": 64, 00:22:41.341 "state": "configuring", 00:22:41.341 "raid_level": "concat", 00:22:41.341 "superblock": false, 00:22:41.341 "num_base_bdevs": 4, 00:22:41.341 "num_base_bdevs_discovered": 3, 00:22:41.341 "num_base_bdevs_operational": 4, 00:22:41.341 "base_bdevs_list": [ 00:22:41.341 { 00:22:41.341 "name": null, 00:22:41.341 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:41.341 "is_configured": false, 00:22:41.341 "data_offset": 0, 00:22:41.341 "data_size": 65536 00:22:41.341 }, 00:22:41.341 { 00:22:41.341 "name": "BaseBdev2", 00:22:41.341 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:41.341 "is_configured": true, 00:22:41.341 "data_offset": 0, 00:22:41.341 "data_size": 65536 00:22:41.341 }, 00:22:41.341 { 00:22:41.341 "name": "BaseBdev3", 00:22:41.341 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:41.341 "is_configured": true, 00:22:41.341 "data_offset": 0, 00:22:41.341 "data_size": 65536 00:22:41.341 }, 00:22:41.341 { 00:22:41.341 "name": "BaseBdev4", 00:22:41.341 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:41.341 "is_configured": true, 00:22:41.341 "data_offset": 0, 00:22:41.341 "data_size": 65536 00:22:41.341 } 00:22:41.341 ] 00:22:41.341 }' 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.341 07:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.600 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:41.600 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:41.600 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.600 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.859 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c246654-b341-439e-a11c-ab09bab0291d 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.860 [2024-10-07 07:43:41.292937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:41.860 [2024-10-07 07:43:41.292997] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:41.860 [2024-10-07 07:43:41.293008] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:41.860 [2024-10-07 07:43:41.293326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:22:41.860 [2024-10-07 07:43:41.293494] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:41.860 [2024-10-07 07:43:41.293518] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:41.860 [2024-10-07 07:43:41.293824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.860 NewBaseBdev 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.860 [ 00:22:41.860 { 00:22:41.860 "name": "NewBaseBdev", 00:22:41.860 "aliases": [ 00:22:41.860 "4c246654-b341-439e-a11c-ab09bab0291d" 00:22:41.860 ], 00:22:41.860 "product_name": "Malloc disk", 00:22:41.860 "block_size": 512, 00:22:41.860 "num_blocks": 65536, 00:22:41.860 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:41.860 "assigned_rate_limits": { 00:22:41.860 "rw_ios_per_sec": 0, 00:22:41.860 "rw_mbytes_per_sec": 0, 00:22:41.860 "r_mbytes_per_sec": 0, 00:22:41.860 "w_mbytes_per_sec": 0 00:22:41.860 }, 00:22:41.860 "claimed": true, 00:22:41.860 "claim_type": "exclusive_write", 00:22:41.860 "zoned": false, 00:22:41.860 "supported_io_types": { 00:22:41.860 "read": true, 00:22:41.860 "write": true, 00:22:41.860 "unmap": true, 00:22:41.860 "flush": true, 00:22:41.860 "reset": true, 00:22:41.860 "nvme_admin": false, 00:22:41.860 "nvme_io": false, 00:22:41.860 "nvme_io_md": false, 00:22:41.860 "write_zeroes": true, 00:22:41.860 "zcopy": true, 00:22:41.860 "get_zone_info": false, 00:22:41.860 "zone_management": false, 00:22:41.860 "zone_append": false, 00:22:41.860 "compare": false, 00:22:41.860 "compare_and_write": false, 00:22:41.860 "abort": true, 00:22:41.860 "seek_hole": false, 00:22:41.860 "seek_data": false, 00:22:41.860 "copy": true, 00:22:41.860 "nvme_iov_md": false 00:22:41.860 }, 00:22:41.860 "memory_domains": [ 00:22:41.860 { 00:22:41.860 "dma_device_id": "system", 00:22:41.860 "dma_device_type": 1 00:22:41.860 }, 00:22:41.860 { 00:22:41.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.860 "dma_device_type": 2 00:22:41.860 } 00:22:41.860 ], 00:22:41.860 "driver_specific": {} 00:22:41.860 } 00:22:41.860 ] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.860 "name": "Existed_Raid", 00:22:41.860 "uuid": "f9d644c3-d07a-45d7-bb0b-b9cab6862eab", 00:22:41.860 "strip_size_kb": 64, 00:22:41.860 "state": "online", 00:22:41.860 "raid_level": "concat", 00:22:41.860 "superblock": false, 00:22:41.860 
"num_base_bdevs": 4, 00:22:41.860 "num_base_bdevs_discovered": 4, 00:22:41.860 "num_base_bdevs_operational": 4, 00:22:41.860 "base_bdevs_list": [ 00:22:41.860 { 00:22:41.860 "name": "NewBaseBdev", 00:22:41.860 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:41.860 "is_configured": true, 00:22:41.860 "data_offset": 0, 00:22:41.860 "data_size": 65536 00:22:41.860 }, 00:22:41.860 { 00:22:41.860 "name": "BaseBdev2", 00:22:41.860 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:41.860 "is_configured": true, 00:22:41.860 "data_offset": 0, 00:22:41.860 "data_size": 65536 00:22:41.860 }, 00:22:41.860 { 00:22:41.860 "name": "BaseBdev3", 00:22:41.860 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:41.860 "is_configured": true, 00:22:41.860 "data_offset": 0, 00:22:41.860 "data_size": 65536 00:22:41.860 }, 00:22:41.860 { 00:22:41.860 "name": "BaseBdev4", 00:22:41.860 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:41.860 "is_configured": true, 00:22:41.860 "data_offset": 0, 00:22:41.860 "data_size": 65536 00:22:41.860 } 00:22:41.860 ] 00:22:41.860 }' 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.860 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:42.429 07:43:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:42.429 [2024-10-07 07:43:41.825524] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:42.429 "name": "Existed_Raid", 00:22:42.429 "aliases": [ 00:22:42.429 "f9d644c3-d07a-45d7-bb0b-b9cab6862eab" 00:22:42.429 ], 00:22:42.429 "product_name": "Raid Volume", 00:22:42.429 "block_size": 512, 00:22:42.429 "num_blocks": 262144, 00:22:42.429 "uuid": "f9d644c3-d07a-45d7-bb0b-b9cab6862eab", 00:22:42.429 "assigned_rate_limits": { 00:22:42.429 "rw_ios_per_sec": 0, 00:22:42.429 "rw_mbytes_per_sec": 0, 00:22:42.429 "r_mbytes_per_sec": 0, 00:22:42.429 "w_mbytes_per_sec": 0 00:22:42.429 }, 00:22:42.429 "claimed": false, 00:22:42.429 "zoned": false, 00:22:42.429 "supported_io_types": { 00:22:42.429 "read": true, 00:22:42.429 "write": true, 00:22:42.429 "unmap": true, 00:22:42.429 "flush": true, 00:22:42.429 "reset": true, 00:22:42.429 "nvme_admin": false, 00:22:42.429 "nvme_io": false, 00:22:42.429 "nvme_io_md": false, 00:22:42.429 "write_zeroes": true, 00:22:42.429 "zcopy": false, 00:22:42.429 "get_zone_info": false, 00:22:42.429 "zone_management": false, 00:22:42.429 "zone_append": false, 00:22:42.429 "compare": false, 00:22:42.429 "compare_and_write": false, 00:22:42.429 "abort": false, 00:22:42.429 "seek_hole": false, 00:22:42.429 "seek_data": false, 00:22:42.429 "copy": false, 00:22:42.429 "nvme_iov_md": false 00:22:42.429 }, 
00:22:42.429 "memory_domains": [ 00:22:42.429 { 00:22:42.429 "dma_device_id": "system", 00:22:42.429 "dma_device_type": 1 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.429 "dma_device_type": 2 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "dma_device_id": "system", 00:22:42.429 "dma_device_type": 1 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.429 "dma_device_type": 2 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "dma_device_id": "system", 00:22:42.429 "dma_device_type": 1 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.429 "dma_device_type": 2 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "dma_device_id": "system", 00:22:42.429 "dma_device_type": 1 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.429 "dma_device_type": 2 00:22:42.429 } 00:22:42.429 ], 00:22:42.429 "driver_specific": { 00:22:42.429 "raid": { 00:22:42.429 "uuid": "f9d644c3-d07a-45d7-bb0b-b9cab6862eab", 00:22:42.429 "strip_size_kb": 64, 00:22:42.429 "state": "online", 00:22:42.429 "raid_level": "concat", 00:22:42.429 "superblock": false, 00:22:42.429 "num_base_bdevs": 4, 00:22:42.429 "num_base_bdevs_discovered": 4, 00:22:42.429 "num_base_bdevs_operational": 4, 00:22:42.429 "base_bdevs_list": [ 00:22:42.429 { 00:22:42.429 "name": "NewBaseBdev", 00:22:42.429 "uuid": "4c246654-b341-439e-a11c-ab09bab0291d", 00:22:42.429 "is_configured": true, 00:22:42.429 "data_offset": 0, 00:22:42.429 "data_size": 65536 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "name": "BaseBdev2", 00:22:42.429 "uuid": "05060a63-37d8-4a15-a570-e8d76d8eddab", 00:22:42.429 "is_configured": true, 00:22:42.429 "data_offset": 0, 00:22:42.429 "data_size": 65536 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "name": "BaseBdev3", 00:22:42.429 "uuid": "640309b0-cf9e-4d1b-9339-ab36593cf5d6", 00:22:42.429 "is_configured": true, 00:22:42.429 "data_offset": 0, 
00:22:42.429 "data_size": 65536 00:22:42.429 }, 00:22:42.429 { 00:22:42.429 "name": "BaseBdev4", 00:22:42.429 "uuid": "68104e52-573d-41cb-958a-0dbbc12b9ee3", 00:22:42.429 "is_configured": true, 00:22:42.429 "data_offset": 0, 00:22:42.429 "data_size": 65536 00:22:42.429 } 00:22:42.429 ] 00:22:42.429 } 00:22:42.429 } 00:22:42.429 }' 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:42.429 BaseBdev2 00:22:42.429 BaseBdev3 00:22:42.429 BaseBdev4' 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.429 07:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:42.430 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:42.430 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.430 07:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:42.689 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.689 [2024-10-07 07:43:42.149210] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:42.689 [2024-10-07 07:43:42.149385] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.690 [2024-10-07 07:43:42.149618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.690 [2024-10-07 07:43:42.149750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.690 [2024-10-07 07:43:42.149891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71430 00:22:42.690 07:43:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 71430 ']' 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # kill -0 71430 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 71430 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:42.690 killing process with pid 71430 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 71430' 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 71430 00:22:42.690 [2024-10-07 07:43:42.191900] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:42.690 07:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 71430 00:22:43.258 [2024-10-07 07:43:42.641293] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:44.660 07:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:44.660 ************************************ 00:22:44.660 END TEST raid_state_function_test 00:22:44.660 ************************************ 00:22:44.660 00:22:44.660 real 0m12.605s 00:22:44.660 user 0m19.973s 00:22:44.660 sys 0m2.223s 00:22:44.660 07:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:44.660 07:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.660 07:43:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:22:44.660 07:43:44 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:22:44.660 07:43:44 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:44.660 07:43:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:44.660 ************************************ 00:22:44.660 START TEST raid_state_function_test_sb 00:22:44.660 ************************************ 00:22:44.660 07:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test concat 4 true 00:22:44.660 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72114 00:22:44.661 07:43:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:44.661 Process raid pid: 72114 00:22:44.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72114' 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72114 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 72114 ']' 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:44.661 07:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.661 [2024-10-07 07:43:44.208734] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:44.661 [2024-10-07 07:43:44.209084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.920 [2024-10-07 07:43:44.376920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.179 [2024-10-07 07:43:44.645458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.438 [2024-10-07 07:43:44.874351] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.438 [2024-10-07 07:43:44.874597] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 [2024-10-07 07:43:45.185619] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:45.697 [2024-10-07 07:43:45.185863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:45.697 [2024-10-07 07:43:45.186003] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:45.697 [2024-10-07 07:43:45.186115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:45.697 [2024-10-07 07:43:45.186200] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:22:45.697 [2024-10-07 07:43:45.186324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:45.697 [2024-10-07 07:43:45.186404] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:45.697 [2024-10-07 07:43:45.186458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.697 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.698 
07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.698 "name": "Existed_Raid", 00:22:45.698 "uuid": "00e4f63e-aa3d-44df-9796-00fa70d03dbb", 00:22:45.698 "strip_size_kb": 64, 00:22:45.698 "state": "configuring", 00:22:45.698 "raid_level": "concat", 00:22:45.698 "superblock": true, 00:22:45.698 "num_base_bdevs": 4, 00:22:45.698 "num_base_bdevs_discovered": 0, 00:22:45.698 "num_base_bdevs_operational": 4, 00:22:45.698 "base_bdevs_list": [ 00:22:45.698 { 00:22:45.698 "name": "BaseBdev1", 00:22:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.698 "is_configured": false, 00:22:45.698 "data_offset": 0, 00:22:45.698 "data_size": 0 00:22:45.698 }, 00:22:45.698 { 00:22:45.698 "name": "BaseBdev2", 00:22:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.698 "is_configured": false, 00:22:45.698 "data_offset": 0, 00:22:45.698 "data_size": 0 00:22:45.698 }, 00:22:45.698 { 00:22:45.698 "name": "BaseBdev3", 00:22:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.698 "is_configured": false, 00:22:45.698 "data_offset": 0, 00:22:45.698 "data_size": 0 00:22:45.698 }, 00:22:45.698 { 00:22:45.698 "name": "BaseBdev4", 00:22:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.698 "is_configured": false, 00:22:45.698 "data_offset": 0, 00:22:45.698 "data_size": 0 00:22:45.698 } 00:22:45.698 ] 00:22:45.698 }' 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.698 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.267 07:43:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.267 [2024-10-07 07:43:45.617573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:46.267 [2024-10-07 07:43:45.617774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.267 [2024-10-07 07:43:45.625618] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:46.267 [2024-10-07 07:43:45.625668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:46.267 [2024-10-07 07:43:45.625681] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.267 [2024-10-07 07:43:45.625695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.267 [2024-10-07 07:43:45.625715] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.267 [2024-10-07 07:43:45.625730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.267 [2024-10-07 07:43:45.625739] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:22:46.267 [2024-10-07 07:43:45.625753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.267 [2024-10-07 07:43:45.691749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.267 BaseBdev1 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.267 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.267 [ 00:22:46.267 { 00:22:46.267 "name": "BaseBdev1", 00:22:46.267 "aliases": [ 00:22:46.267 "e07b2baf-4806-4223-bbba-019a29e0e470" 00:22:46.267 ], 00:22:46.267 "product_name": "Malloc disk", 00:22:46.267 "block_size": 512, 00:22:46.267 "num_blocks": 65536, 00:22:46.268 "uuid": "e07b2baf-4806-4223-bbba-019a29e0e470", 00:22:46.268 "assigned_rate_limits": { 00:22:46.268 "rw_ios_per_sec": 0, 00:22:46.268 "rw_mbytes_per_sec": 0, 00:22:46.268 "r_mbytes_per_sec": 0, 00:22:46.268 "w_mbytes_per_sec": 0 00:22:46.268 }, 00:22:46.268 "claimed": true, 00:22:46.268 "claim_type": "exclusive_write", 00:22:46.268 "zoned": false, 00:22:46.268 "supported_io_types": { 00:22:46.268 "read": true, 00:22:46.268 "write": true, 00:22:46.268 "unmap": true, 00:22:46.268 "flush": true, 00:22:46.268 "reset": true, 00:22:46.268 "nvme_admin": false, 00:22:46.268 "nvme_io": false, 00:22:46.268 "nvme_io_md": false, 00:22:46.268 "write_zeroes": true, 00:22:46.268 "zcopy": true, 00:22:46.268 "get_zone_info": false, 00:22:46.268 "zone_management": false, 00:22:46.268 "zone_append": false, 00:22:46.268 "compare": false, 00:22:46.268 "compare_and_write": false, 00:22:46.268 "abort": true, 00:22:46.268 "seek_hole": false, 00:22:46.268 "seek_data": false, 00:22:46.268 "copy": true, 00:22:46.268 "nvme_iov_md": false 00:22:46.268 }, 00:22:46.268 "memory_domains": [ 00:22:46.268 { 00:22:46.268 "dma_device_id": "system", 00:22:46.268 "dma_device_type": 1 00:22:46.268 }, 00:22:46.268 { 00:22:46.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.268 "dma_device_type": 2 00:22:46.268 } 
00:22:46.268 ], 00:22:46.268 "driver_specific": {} 00:22:46.268 } 00:22:46.268 ] 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.268 07:43:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.268 "name": "Existed_Raid", 00:22:46.268 "uuid": "cd697e30-14bd-40e2-b486-e88a6a2b82ee", 00:22:46.268 "strip_size_kb": 64, 00:22:46.268 "state": "configuring", 00:22:46.268 "raid_level": "concat", 00:22:46.268 "superblock": true, 00:22:46.268 "num_base_bdevs": 4, 00:22:46.268 "num_base_bdevs_discovered": 1, 00:22:46.268 "num_base_bdevs_operational": 4, 00:22:46.268 "base_bdevs_list": [ 00:22:46.268 { 00:22:46.268 "name": "BaseBdev1", 00:22:46.268 "uuid": "e07b2baf-4806-4223-bbba-019a29e0e470", 00:22:46.268 "is_configured": true, 00:22:46.268 "data_offset": 2048, 00:22:46.268 "data_size": 63488 00:22:46.268 }, 00:22:46.268 { 00:22:46.268 "name": "BaseBdev2", 00:22:46.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.268 "is_configured": false, 00:22:46.268 "data_offset": 0, 00:22:46.268 "data_size": 0 00:22:46.268 }, 00:22:46.268 { 00:22:46.268 "name": "BaseBdev3", 00:22:46.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.268 "is_configured": false, 00:22:46.268 "data_offset": 0, 00:22:46.268 "data_size": 0 00:22:46.268 }, 00:22:46.268 { 00:22:46.268 "name": "BaseBdev4", 00:22:46.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.268 "is_configured": false, 00:22:46.268 "data_offset": 0, 00:22:46.268 "data_size": 0 00:22:46.268 } 00:22:46.268 ] 00:22:46.268 }' 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.268 07:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.870 07:43:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.870 [2024-10-07 07:43:46.143913] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:46.870 [2024-10-07 07:43:46.144099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.870 [2024-10-07 07:43:46.155982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.870 [2024-10-07 07:43:46.158466] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.870 [2024-10-07 07:43:46.158634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.870 [2024-10-07 07:43:46.158767] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.870 [2024-10-07 07:43:46.158824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.870 [2024-10-07 07:43:46.159018] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:46.870 [2024-10-07 07:43:46.159076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:46.870 "name": "Existed_Raid", 00:22:46.870 "uuid": "81f90116-0a97-45ad-bc24-bbb6f62dc100", 00:22:46.870 "strip_size_kb": 64, 00:22:46.870 "state": "configuring", 00:22:46.870 "raid_level": "concat", 00:22:46.870 "superblock": true, 00:22:46.870 "num_base_bdevs": 4, 00:22:46.870 "num_base_bdevs_discovered": 1, 00:22:46.870 "num_base_bdevs_operational": 4, 00:22:46.870 "base_bdevs_list": [ 00:22:46.870 { 00:22:46.870 "name": "BaseBdev1", 00:22:46.870 "uuid": "e07b2baf-4806-4223-bbba-019a29e0e470", 00:22:46.870 "is_configured": true, 00:22:46.870 "data_offset": 2048, 00:22:46.870 "data_size": 63488 00:22:46.870 }, 00:22:46.870 { 00:22:46.870 "name": "BaseBdev2", 00:22:46.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.870 "is_configured": false, 00:22:46.870 "data_offset": 0, 00:22:46.870 "data_size": 0 00:22:46.870 }, 00:22:46.870 { 00:22:46.870 "name": "BaseBdev3", 00:22:46.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.870 "is_configured": false, 00:22:46.870 "data_offset": 0, 00:22:46.870 "data_size": 0 00:22:46.870 }, 00:22:46.870 { 00:22:46.870 "name": "BaseBdev4", 00:22:46.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.870 "is_configured": false, 00:22:46.870 "data_offset": 0, 00:22:46.870 "data_size": 0 00:22:46.870 } 00:22:46.870 ] 00:22:46.870 }' 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.870 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.129 [2024-10-07 07:43:46.632430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:22:47.129 BaseBdev2 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.129 [ 00:22:47.129 { 00:22:47.129 "name": "BaseBdev2", 00:22:47.129 "aliases": [ 00:22:47.129 "a2ac6203-e632-4bfb-b2c9-3d2c6cb58a9d" 00:22:47.129 ], 00:22:47.129 "product_name": "Malloc disk", 00:22:47.129 "block_size": 512, 00:22:47.129 "num_blocks": 65536, 00:22:47.129 "uuid": "a2ac6203-e632-4bfb-b2c9-3d2c6cb58a9d", 
00:22:47.129 "assigned_rate_limits": { 00:22:47.129 "rw_ios_per_sec": 0, 00:22:47.129 "rw_mbytes_per_sec": 0, 00:22:47.129 "r_mbytes_per_sec": 0, 00:22:47.129 "w_mbytes_per_sec": 0 00:22:47.129 }, 00:22:47.129 "claimed": true, 00:22:47.129 "claim_type": "exclusive_write", 00:22:47.129 "zoned": false, 00:22:47.129 "supported_io_types": { 00:22:47.129 "read": true, 00:22:47.129 "write": true, 00:22:47.129 "unmap": true, 00:22:47.129 "flush": true, 00:22:47.129 "reset": true, 00:22:47.129 "nvme_admin": false, 00:22:47.129 "nvme_io": false, 00:22:47.129 "nvme_io_md": false, 00:22:47.129 "write_zeroes": true, 00:22:47.129 "zcopy": true, 00:22:47.129 "get_zone_info": false, 00:22:47.129 "zone_management": false, 00:22:47.129 "zone_append": false, 00:22:47.129 "compare": false, 00:22:47.129 "compare_and_write": false, 00:22:47.129 "abort": true, 00:22:47.129 "seek_hole": false, 00:22:47.129 "seek_data": false, 00:22:47.129 "copy": true, 00:22:47.129 "nvme_iov_md": false 00:22:47.129 }, 00:22:47.129 "memory_domains": [ 00:22:47.129 { 00:22:47.129 "dma_device_id": "system", 00:22:47.129 "dma_device_type": 1 00:22:47.129 }, 00:22:47.129 { 00:22:47.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.129 "dma_device_type": 2 00:22:47.129 } 00:22:47.129 ], 00:22:47.129 "driver_specific": {} 00:22:47.129 } 00:22:47.129 ] 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.129 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.388 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.388 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.388 "name": "Existed_Raid", 00:22:47.388 "uuid": "81f90116-0a97-45ad-bc24-bbb6f62dc100", 00:22:47.388 "strip_size_kb": 64, 00:22:47.388 "state": "configuring", 00:22:47.388 "raid_level": "concat", 00:22:47.388 "superblock": true, 00:22:47.388 "num_base_bdevs": 4, 00:22:47.388 "num_base_bdevs_discovered": 2, 00:22:47.388 
"num_base_bdevs_operational": 4, 00:22:47.388 "base_bdevs_list": [ 00:22:47.388 { 00:22:47.388 "name": "BaseBdev1", 00:22:47.388 "uuid": "e07b2baf-4806-4223-bbba-019a29e0e470", 00:22:47.388 "is_configured": true, 00:22:47.388 "data_offset": 2048, 00:22:47.388 "data_size": 63488 00:22:47.388 }, 00:22:47.388 { 00:22:47.388 "name": "BaseBdev2", 00:22:47.388 "uuid": "a2ac6203-e632-4bfb-b2c9-3d2c6cb58a9d", 00:22:47.388 "is_configured": true, 00:22:47.388 "data_offset": 2048, 00:22:47.388 "data_size": 63488 00:22:47.388 }, 00:22:47.388 { 00:22:47.388 "name": "BaseBdev3", 00:22:47.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.388 "is_configured": false, 00:22:47.388 "data_offset": 0, 00:22:47.388 "data_size": 0 00:22:47.388 }, 00:22:47.388 { 00:22:47.388 "name": "BaseBdev4", 00:22:47.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.388 "is_configured": false, 00:22:47.388 "data_offset": 0, 00:22:47.388 "data_size": 0 00:22:47.388 } 00:22:47.388 ] 00:22:47.388 }' 00:22:47.388 07:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.388 07:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.647 [2024-10-07 07:43:47.189376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:47.647 BaseBdev3 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.647 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.648 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.648 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:47.648 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.648 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.648 [ 00:22:47.648 { 00:22:47.905 "name": "BaseBdev3", 00:22:47.905 "aliases": [ 00:22:47.905 "f792bce0-1b36-481b-9849-133acc16f302" 00:22:47.905 ], 00:22:47.905 "product_name": "Malloc disk", 00:22:47.905 "block_size": 512, 00:22:47.905 "num_blocks": 65536, 00:22:47.905 "uuid": "f792bce0-1b36-481b-9849-133acc16f302", 00:22:47.905 "assigned_rate_limits": { 00:22:47.906 "rw_ios_per_sec": 0, 00:22:47.906 "rw_mbytes_per_sec": 0, 00:22:47.906 "r_mbytes_per_sec": 0, 00:22:47.906 "w_mbytes_per_sec": 0 00:22:47.906 }, 00:22:47.906 "claimed": true, 00:22:47.906 "claim_type": "exclusive_write", 00:22:47.906 "zoned": false, 00:22:47.906 "supported_io_types": { 
00:22:47.906 "read": true, 00:22:47.906 "write": true, 00:22:47.906 "unmap": true, 00:22:47.906 "flush": true, 00:22:47.906 "reset": true, 00:22:47.906 "nvme_admin": false, 00:22:47.906 "nvme_io": false, 00:22:47.906 "nvme_io_md": false, 00:22:47.906 "write_zeroes": true, 00:22:47.906 "zcopy": true, 00:22:47.906 "get_zone_info": false, 00:22:47.906 "zone_management": false, 00:22:47.906 "zone_append": false, 00:22:47.906 "compare": false, 00:22:47.906 "compare_and_write": false, 00:22:47.906 "abort": true, 00:22:47.906 "seek_hole": false, 00:22:47.906 "seek_data": false, 00:22:47.906 "copy": true, 00:22:47.906 "nvme_iov_md": false 00:22:47.906 }, 00:22:47.906 "memory_domains": [ 00:22:47.906 { 00:22:47.906 "dma_device_id": "system", 00:22:47.906 "dma_device_type": 1 00:22:47.906 }, 00:22:47.906 { 00:22:47.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.906 "dma_device_type": 2 00:22:47.906 } 00:22:47.906 ], 00:22:47.906 "driver_specific": {} 00:22:47.906 } 00:22:47.906 ] 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.906 "name": "Existed_Raid", 00:22:47.906 "uuid": "81f90116-0a97-45ad-bc24-bbb6f62dc100", 00:22:47.906 "strip_size_kb": 64, 00:22:47.906 "state": "configuring", 00:22:47.906 "raid_level": "concat", 00:22:47.906 "superblock": true, 00:22:47.906 "num_base_bdevs": 4, 00:22:47.906 "num_base_bdevs_discovered": 3, 00:22:47.906 "num_base_bdevs_operational": 4, 00:22:47.906 "base_bdevs_list": [ 00:22:47.906 { 00:22:47.906 "name": "BaseBdev1", 00:22:47.906 "uuid": "e07b2baf-4806-4223-bbba-019a29e0e470", 00:22:47.906 "is_configured": true, 00:22:47.906 "data_offset": 2048, 00:22:47.906 "data_size": 63488 00:22:47.906 }, 00:22:47.906 { 00:22:47.906 "name": "BaseBdev2", 00:22:47.906 
"uuid": "a2ac6203-e632-4bfb-b2c9-3d2c6cb58a9d", 00:22:47.906 "is_configured": true, 00:22:47.906 "data_offset": 2048, 00:22:47.906 "data_size": 63488 00:22:47.906 }, 00:22:47.906 { 00:22:47.906 "name": "BaseBdev3", 00:22:47.906 "uuid": "f792bce0-1b36-481b-9849-133acc16f302", 00:22:47.906 "is_configured": true, 00:22:47.906 "data_offset": 2048, 00:22:47.906 "data_size": 63488 00:22:47.906 }, 00:22:47.906 { 00:22:47.906 "name": "BaseBdev4", 00:22:47.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.906 "is_configured": false, 00:22:47.906 "data_offset": 0, 00:22:47.906 "data_size": 0 00:22:47.906 } 00:22:47.906 ] 00:22:47.906 }' 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.906 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.165 [2024-10-07 07:43:47.715212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:48.165 BaseBdev4 00:22:48.165 [2024-10-07 07:43:47.715811] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:48.165 [2024-10-07 07:43:47.715841] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:48.165 [2024-10-07 07:43:47.716173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:48.165 [2024-10-07 07:43:47.716342] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:48.165 [2024-10-07 07:43:47.716360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:22:48.165 [2024-10-07 07:43:47.716535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.165 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.424 [ 00:22:48.424 { 00:22:48.424 "name": "BaseBdev4", 00:22:48.424 "aliases": [ 00:22:48.424 "0ed6fe52-62eb-45e3-ac39-024326aaa015" 00:22:48.424 ], 00:22:48.424 "product_name": "Malloc disk", 00:22:48.424 "block_size": 512, 00:22:48.424 
"num_blocks": 65536, 00:22:48.424 "uuid": "0ed6fe52-62eb-45e3-ac39-024326aaa015", 00:22:48.424 "assigned_rate_limits": { 00:22:48.424 "rw_ios_per_sec": 0, 00:22:48.424 "rw_mbytes_per_sec": 0, 00:22:48.424 "r_mbytes_per_sec": 0, 00:22:48.424 "w_mbytes_per_sec": 0 00:22:48.424 }, 00:22:48.424 "claimed": true, 00:22:48.424 "claim_type": "exclusive_write", 00:22:48.424 "zoned": false, 00:22:48.424 "supported_io_types": { 00:22:48.424 "read": true, 00:22:48.424 "write": true, 00:22:48.424 "unmap": true, 00:22:48.424 "flush": true, 00:22:48.424 "reset": true, 00:22:48.424 "nvme_admin": false, 00:22:48.424 "nvme_io": false, 00:22:48.424 "nvme_io_md": false, 00:22:48.424 "write_zeroes": true, 00:22:48.424 "zcopy": true, 00:22:48.424 "get_zone_info": false, 00:22:48.424 "zone_management": false, 00:22:48.424 "zone_append": false, 00:22:48.424 "compare": false, 00:22:48.424 "compare_and_write": false, 00:22:48.424 "abort": true, 00:22:48.424 "seek_hole": false, 00:22:48.424 "seek_data": false, 00:22:48.424 "copy": true, 00:22:48.424 "nvme_iov_md": false 00:22:48.424 }, 00:22:48.424 "memory_domains": [ 00:22:48.424 { 00:22:48.424 "dma_device_id": "system", 00:22:48.424 "dma_device_type": 1 00:22:48.424 }, 00:22:48.424 { 00:22:48.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.424 "dma_device_type": 2 00:22:48.424 } 00:22:48.424 ], 00:22:48.424 "driver_specific": {} 00:22:48.424 } 00:22:48.424 ] 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:48.424 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.424 "name": "Existed_Raid", 00:22:48.424 "uuid": "81f90116-0a97-45ad-bc24-bbb6f62dc100", 00:22:48.424 "strip_size_kb": 64, 00:22:48.424 "state": "online", 00:22:48.424 "raid_level": "concat", 00:22:48.424 "superblock": true, 00:22:48.424 "num_base_bdevs": 4, 
00:22:48.424 "num_base_bdevs_discovered": 4, 00:22:48.424 "num_base_bdevs_operational": 4, 00:22:48.424 "base_bdevs_list": [ 00:22:48.424 { 00:22:48.424 "name": "BaseBdev1", 00:22:48.424 "uuid": "e07b2baf-4806-4223-bbba-019a29e0e470", 00:22:48.424 "is_configured": true, 00:22:48.424 "data_offset": 2048, 00:22:48.424 "data_size": 63488 00:22:48.424 }, 00:22:48.424 { 00:22:48.424 "name": "BaseBdev2", 00:22:48.424 "uuid": "a2ac6203-e632-4bfb-b2c9-3d2c6cb58a9d", 00:22:48.424 "is_configured": true, 00:22:48.424 "data_offset": 2048, 00:22:48.425 "data_size": 63488 00:22:48.425 }, 00:22:48.425 { 00:22:48.425 "name": "BaseBdev3", 00:22:48.425 "uuid": "f792bce0-1b36-481b-9849-133acc16f302", 00:22:48.425 "is_configured": true, 00:22:48.425 "data_offset": 2048, 00:22:48.425 "data_size": 63488 00:22:48.425 }, 00:22:48.425 { 00:22:48.425 "name": "BaseBdev4", 00:22:48.425 "uuid": "0ed6fe52-62eb-45e3-ac39-024326aaa015", 00:22:48.425 "is_configured": true, 00:22:48.425 "data_offset": 2048, 00:22:48.425 "data_size": 63488 00:22:48.425 } 00:22:48.425 ] 00:22:48.425 }' 00:22:48.425 07:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.425 07:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:48.684 
07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.684 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.684 [2024-10-07 07:43:48.240017] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.942 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:48.942 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.942 "name": "Existed_Raid", 00:22:48.942 "aliases": [ 00:22:48.942 "81f90116-0a97-45ad-bc24-bbb6f62dc100" 00:22:48.942 ], 00:22:48.942 "product_name": "Raid Volume", 00:22:48.942 "block_size": 512, 00:22:48.942 "num_blocks": 253952, 00:22:48.943 "uuid": "81f90116-0a97-45ad-bc24-bbb6f62dc100", 00:22:48.943 "assigned_rate_limits": { 00:22:48.943 "rw_ios_per_sec": 0, 00:22:48.943 "rw_mbytes_per_sec": 0, 00:22:48.943 "r_mbytes_per_sec": 0, 00:22:48.943 "w_mbytes_per_sec": 0 00:22:48.943 }, 00:22:48.943 "claimed": false, 00:22:48.943 "zoned": false, 00:22:48.943 "supported_io_types": { 00:22:48.943 "read": true, 00:22:48.943 "write": true, 00:22:48.943 "unmap": true, 00:22:48.943 "flush": true, 00:22:48.943 "reset": true, 00:22:48.943 "nvme_admin": false, 00:22:48.943 "nvme_io": false, 00:22:48.943 "nvme_io_md": false, 00:22:48.943 "write_zeroes": true, 00:22:48.943 "zcopy": false, 00:22:48.943 "get_zone_info": false, 00:22:48.943 "zone_management": false, 00:22:48.943 "zone_append": false, 00:22:48.943 "compare": false, 00:22:48.943 "compare_and_write": false, 00:22:48.943 "abort": false, 00:22:48.943 "seek_hole": false, 00:22:48.943 "seek_data": false, 00:22:48.943 "copy": false, 00:22:48.943 
"nvme_iov_md": false 00:22:48.943 }, 00:22:48.943 "memory_domains": [ 00:22:48.943 { 00:22:48.943 "dma_device_id": "system", 00:22:48.943 "dma_device_type": 1 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.943 "dma_device_type": 2 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "dma_device_id": "system", 00:22:48.943 "dma_device_type": 1 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.943 "dma_device_type": 2 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "dma_device_id": "system", 00:22:48.943 "dma_device_type": 1 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.943 "dma_device_type": 2 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "dma_device_id": "system", 00:22:48.943 "dma_device_type": 1 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.943 "dma_device_type": 2 00:22:48.943 } 00:22:48.943 ], 00:22:48.943 "driver_specific": { 00:22:48.943 "raid": { 00:22:48.943 "uuid": "81f90116-0a97-45ad-bc24-bbb6f62dc100", 00:22:48.943 "strip_size_kb": 64, 00:22:48.943 "state": "online", 00:22:48.943 "raid_level": "concat", 00:22:48.943 "superblock": true, 00:22:48.943 "num_base_bdevs": 4, 00:22:48.943 "num_base_bdevs_discovered": 4, 00:22:48.943 "num_base_bdevs_operational": 4, 00:22:48.943 "base_bdevs_list": [ 00:22:48.943 { 00:22:48.943 "name": "BaseBdev1", 00:22:48.943 "uuid": "e07b2baf-4806-4223-bbba-019a29e0e470", 00:22:48.943 "is_configured": true, 00:22:48.943 "data_offset": 2048, 00:22:48.943 "data_size": 63488 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "name": "BaseBdev2", 00:22:48.943 "uuid": "a2ac6203-e632-4bfb-b2c9-3d2c6cb58a9d", 00:22:48.943 "is_configured": true, 00:22:48.943 "data_offset": 2048, 00:22:48.943 "data_size": 63488 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "name": "BaseBdev3", 00:22:48.943 "uuid": "f792bce0-1b36-481b-9849-133acc16f302", 00:22:48.943 "is_configured": true, 
00:22:48.943 "data_offset": 2048, 00:22:48.943 "data_size": 63488 00:22:48.943 }, 00:22:48.943 { 00:22:48.943 "name": "BaseBdev4", 00:22:48.943 "uuid": "0ed6fe52-62eb-45e3-ac39-024326aaa015", 00:22:48.943 "is_configured": true, 00:22:48.943 "data_offset": 2048, 00:22:48.943 "data_size": 63488 00:22:48.943 } 00:22:48.943 ] 00:22:48.943 } 00:22:48.943 } 00:22:48.943 }' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:48.943 BaseBdev2 00:22:48.943 BaseBdev3 00:22:48.943 BaseBdev4' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:48.943 07:43:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.943 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.201 [2024-10-07 07:43:48.547595] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:49.201 [2024-10-07 07:43:48.547793] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:49.201 [2024-10-07 07:43:48.547965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.201 "name": "Existed_Raid", 00:22:49.201 "uuid": "81f90116-0a97-45ad-bc24-bbb6f62dc100", 00:22:49.201 "strip_size_kb": 64, 00:22:49.201 "state": "offline", 00:22:49.201 "raid_level": "concat", 00:22:49.201 "superblock": true, 00:22:49.201 "num_base_bdevs": 4, 00:22:49.201 "num_base_bdevs_discovered": 3, 00:22:49.201 "num_base_bdevs_operational": 3, 00:22:49.201 "base_bdevs_list": [ 00:22:49.201 { 00:22:49.201 "name": null, 00:22:49.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.201 "is_configured": false, 00:22:49.201 "data_offset": 0, 00:22:49.201 "data_size": 63488 00:22:49.201 }, 00:22:49.201 { 00:22:49.201 "name": "BaseBdev2", 00:22:49.201 "uuid": "a2ac6203-e632-4bfb-b2c9-3d2c6cb58a9d", 00:22:49.201 "is_configured": true, 00:22:49.201 "data_offset": 2048, 00:22:49.201 "data_size": 63488 00:22:49.201 }, 00:22:49.201 { 00:22:49.201 "name": "BaseBdev3", 00:22:49.201 "uuid": "f792bce0-1b36-481b-9849-133acc16f302", 00:22:49.201 "is_configured": true, 00:22:49.201 "data_offset": 2048, 00:22:49.201 "data_size": 63488 00:22:49.201 }, 00:22:49.201 { 00:22:49.201 "name": "BaseBdev4", 00:22:49.201 "uuid": "0ed6fe52-62eb-45e3-ac39-024326aaa015", 00:22:49.201 "is_configured": true, 00:22:49.201 "data_offset": 2048, 00:22:49.201 "data_size": 63488 00:22:49.201 } 00:22:49.201 ] 00:22:49.201 }' 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.201 07:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.769 
07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.769 [2024-10-07 07:43:49.197861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:49.769 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.028 [2024-10-07 07:43:49.346159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:50.028 07:43:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.028 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.028 [2024-10-07 07:43:49.524168] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:50.028 [2024-10-07 07:43:49.524375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.288 BaseBdev2 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.288 [ 00:22:50.288 { 00:22:50.288 "name": "BaseBdev2", 00:22:50.288 "aliases": [ 00:22:50.288 
"aaa5baef-0c5b-4cba-96f2-5f407fc4d89d" 00:22:50.288 ], 00:22:50.288 "product_name": "Malloc disk", 00:22:50.288 "block_size": 512, 00:22:50.288 "num_blocks": 65536, 00:22:50.288 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:50.288 "assigned_rate_limits": { 00:22:50.288 "rw_ios_per_sec": 0, 00:22:50.288 "rw_mbytes_per_sec": 0, 00:22:50.288 "r_mbytes_per_sec": 0, 00:22:50.288 "w_mbytes_per_sec": 0 00:22:50.288 }, 00:22:50.288 "claimed": false, 00:22:50.288 "zoned": false, 00:22:50.288 "supported_io_types": { 00:22:50.288 "read": true, 00:22:50.288 "write": true, 00:22:50.288 "unmap": true, 00:22:50.288 "flush": true, 00:22:50.288 "reset": true, 00:22:50.288 "nvme_admin": false, 00:22:50.288 "nvme_io": false, 00:22:50.288 "nvme_io_md": false, 00:22:50.288 "write_zeroes": true, 00:22:50.288 "zcopy": true, 00:22:50.288 "get_zone_info": false, 00:22:50.288 "zone_management": false, 00:22:50.288 "zone_append": false, 00:22:50.288 "compare": false, 00:22:50.288 "compare_and_write": false, 00:22:50.288 "abort": true, 00:22:50.288 "seek_hole": false, 00:22:50.288 "seek_data": false, 00:22:50.288 "copy": true, 00:22:50.288 "nvme_iov_md": false 00:22:50.288 }, 00:22:50.288 "memory_domains": [ 00:22:50.288 { 00:22:50.288 "dma_device_id": "system", 00:22:50.288 "dma_device_type": 1 00:22:50.288 }, 00:22:50.288 { 00:22:50.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.288 "dma_device_type": 2 00:22:50.288 } 00:22:50.288 ], 00:22:50.288 "driver_specific": {} 00:22:50.288 } 00:22:50.288 ] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:50.288 07:43:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.288 BaseBdev3 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.288 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.288 [ 00:22:50.288 { 
00:22:50.288 "name": "BaseBdev3", 00:22:50.288 "aliases": [ 00:22:50.288 "0911d2a1-09a7-4518-88db-002863151364" 00:22:50.288 ], 00:22:50.288 "product_name": "Malloc disk", 00:22:50.288 "block_size": 512, 00:22:50.288 "num_blocks": 65536, 00:22:50.288 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:50.288 "assigned_rate_limits": { 00:22:50.288 "rw_ios_per_sec": 0, 00:22:50.288 "rw_mbytes_per_sec": 0, 00:22:50.288 "r_mbytes_per_sec": 0, 00:22:50.288 "w_mbytes_per_sec": 0 00:22:50.288 }, 00:22:50.288 "claimed": false, 00:22:50.288 "zoned": false, 00:22:50.288 "supported_io_types": { 00:22:50.288 "read": true, 00:22:50.288 "write": true, 00:22:50.288 "unmap": true, 00:22:50.288 "flush": true, 00:22:50.288 "reset": true, 00:22:50.288 "nvme_admin": false, 00:22:50.288 "nvme_io": false, 00:22:50.288 "nvme_io_md": false, 00:22:50.288 "write_zeroes": true, 00:22:50.288 "zcopy": true, 00:22:50.288 "get_zone_info": false, 00:22:50.288 "zone_management": false, 00:22:50.288 "zone_append": false, 00:22:50.288 "compare": false, 00:22:50.288 "compare_and_write": false, 00:22:50.288 "abort": true, 00:22:50.288 "seek_hole": false, 00:22:50.288 "seek_data": false, 00:22:50.288 "copy": true, 00:22:50.288 "nvme_iov_md": false 00:22:50.288 }, 00:22:50.288 "memory_domains": [ 00:22:50.288 { 00:22:50.288 "dma_device_id": "system", 00:22:50.288 "dma_device_type": 1 00:22:50.288 }, 00:22:50.288 { 00:22:50.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.288 "dma_device_type": 2 00:22:50.288 } 00:22:50.288 ], 00:22:50.288 "driver_specific": {} 00:22:50.288 } 00:22:50.289 ] 00:22:50.289 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.289 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:50.289 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:50.289 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:22:50.289 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:50.289 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.289 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.548 BaseBdev4 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.548 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:22:50.548 [ 00:22:50.548 { 00:22:50.548 "name": "BaseBdev4", 00:22:50.548 "aliases": [ 00:22:50.548 "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503" 00:22:50.548 ], 00:22:50.548 "product_name": "Malloc disk", 00:22:50.548 "block_size": 512, 00:22:50.548 "num_blocks": 65536, 00:22:50.548 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:50.548 "assigned_rate_limits": { 00:22:50.548 "rw_ios_per_sec": 0, 00:22:50.548 "rw_mbytes_per_sec": 0, 00:22:50.548 "r_mbytes_per_sec": 0, 00:22:50.548 "w_mbytes_per_sec": 0 00:22:50.548 }, 00:22:50.548 "claimed": false, 00:22:50.548 "zoned": false, 00:22:50.548 "supported_io_types": { 00:22:50.548 "read": true, 00:22:50.548 "write": true, 00:22:50.548 "unmap": true, 00:22:50.548 "flush": true, 00:22:50.548 "reset": true, 00:22:50.548 "nvme_admin": false, 00:22:50.548 "nvme_io": false, 00:22:50.548 "nvme_io_md": false, 00:22:50.548 "write_zeroes": true, 00:22:50.548 "zcopy": true, 00:22:50.548 "get_zone_info": false, 00:22:50.548 "zone_management": false, 00:22:50.548 "zone_append": false, 00:22:50.548 "compare": false, 00:22:50.548 "compare_and_write": false, 00:22:50.548 "abort": true, 00:22:50.548 "seek_hole": false, 00:22:50.548 "seek_data": false, 00:22:50.548 "copy": true, 00:22:50.548 "nvme_iov_md": false 00:22:50.548 }, 00:22:50.548 "memory_domains": [ 00:22:50.548 { 00:22:50.548 "dma_device_id": "system", 00:22:50.548 "dma_device_type": 1 00:22:50.548 }, 00:22:50.548 { 00:22:50.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.548 "dma_device_type": 2 00:22:50.548 } 00:22:50.548 ], 00:22:50.548 "driver_specific": {} 00:22:50.548 } 00:22:50.549 ] 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:50.549 07:43:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.549 [2024-10-07 07:43:49.906515] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:50.549 [2024-10-07 07:43:49.906727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:50.549 [2024-10-07 07:43:49.906849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.549 [2024-10-07 07:43:49.909276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:50.549 [2024-10-07 07:43:49.909466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.549 "name": "Existed_Raid", 00:22:50.549 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:50.549 "strip_size_kb": 64, 00:22:50.549 "state": "configuring", 00:22:50.549 "raid_level": "concat", 00:22:50.549 "superblock": true, 00:22:50.549 "num_base_bdevs": 4, 00:22:50.549 "num_base_bdevs_discovered": 3, 00:22:50.549 "num_base_bdevs_operational": 4, 00:22:50.549 "base_bdevs_list": [ 00:22:50.549 { 00:22:50.549 "name": "BaseBdev1", 00:22:50.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.549 "is_configured": false, 00:22:50.549 "data_offset": 0, 00:22:50.549 "data_size": 0 00:22:50.549 }, 00:22:50.549 { 00:22:50.549 "name": "BaseBdev2", 00:22:50.549 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:50.549 "is_configured": true, 00:22:50.549 "data_offset": 2048, 00:22:50.549 "data_size": 63488 
00:22:50.549 }, 00:22:50.549 { 00:22:50.549 "name": "BaseBdev3", 00:22:50.549 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:50.549 "is_configured": true, 00:22:50.549 "data_offset": 2048, 00:22:50.549 "data_size": 63488 00:22:50.549 }, 00:22:50.549 { 00:22:50.549 "name": "BaseBdev4", 00:22:50.549 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:50.549 "is_configured": true, 00:22:50.549 "data_offset": 2048, 00:22:50.549 "data_size": 63488 00:22:50.549 } 00:22:50.549 ] 00:22:50.549 }' 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.549 07:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.808 [2024-10-07 07:43:50.314552] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.808 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:51.067 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.067 "name": "Existed_Raid", 00:22:51.067 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:51.067 "strip_size_kb": 64, 00:22:51.067 "state": "configuring", 00:22:51.067 "raid_level": "concat", 00:22:51.067 "superblock": true, 00:22:51.067 "num_base_bdevs": 4, 00:22:51.067 "num_base_bdevs_discovered": 2, 00:22:51.067 "num_base_bdevs_operational": 4, 00:22:51.067 "base_bdevs_list": [ 00:22:51.067 { 00:22:51.067 "name": "BaseBdev1", 00:22:51.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.067 "is_configured": false, 00:22:51.067 "data_offset": 0, 00:22:51.067 "data_size": 0 00:22:51.067 }, 00:22:51.067 { 00:22:51.067 "name": null, 00:22:51.067 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:51.067 "is_configured": false, 00:22:51.067 "data_offset": 0, 00:22:51.067 "data_size": 63488 
00:22:51.067 }, 00:22:51.067 { 00:22:51.067 "name": "BaseBdev3", 00:22:51.067 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:51.067 "is_configured": true, 00:22:51.067 "data_offset": 2048, 00:22:51.067 "data_size": 63488 00:22:51.067 }, 00:22:51.067 { 00:22:51.067 "name": "BaseBdev4", 00:22:51.067 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:51.067 "is_configured": true, 00:22:51.067 "data_offset": 2048, 00:22:51.067 "data_size": 63488 00:22:51.067 } 00:22:51.067 ] 00:22:51.067 }' 00:22:51.067 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.067 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:51.326 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.585 [2024-10-07 07:43:50.895575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:51.585 BaseBdev1 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.585 [ 00:22:51.585 { 00:22:51.585 "name": "BaseBdev1", 00:22:51.585 "aliases": [ 00:22:51.585 "d06205ca-1710-43a5-86e0-f522935bf3f4" 00:22:51.585 ], 00:22:51.585 "product_name": "Malloc disk", 00:22:51.585 "block_size": 512, 00:22:51.585 "num_blocks": 65536, 00:22:51.585 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:51.585 "assigned_rate_limits": { 00:22:51.585 "rw_ios_per_sec": 0, 00:22:51.585 "rw_mbytes_per_sec": 0, 
00:22:51.585 "r_mbytes_per_sec": 0, 00:22:51.585 "w_mbytes_per_sec": 0 00:22:51.585 }, 00:22:51.585 "claimed": true, 00:22:51.585 "claim_type": "exclusive_write", 00:22:51.585 "zoned": false, 00:22:51.585 "supported_io_types": { 00:22:51.585 "read": true, 00:22:51.585 "write": true, 00:22:51.585 "unmap": true, 00:22:51.585 "flush": true, 00:22:51.585 "reset": true, 00:22:51.585 "nvme_admin": false, 00:22:51.585 "nvme_io": false, 00:22:51.585 "nvme_io_md": false, 00:22:51.585 "write_zeroes": true, 00:22:51.585 "zcopy": true, 00:22:51.585 "get_zone_info": false, 00:22:51.585 "zone_management": false, 00:22:51.585 "zone_append": false, 00:22:51.585 "compare": false, 00:22:51.585 "compare_and_write": false, 00:22:51.585 "abort": true, 00:22:51.585 "seek_hole": false, 00:22:51.585 "seek_data": false, 00:22:51.585 "copy": true, 00:22:51.585 "nvme_iov_md": false 00:22:51.585 }, 00:22:51.585 "memory_domains": [ 00:22:51.585 { 00:22:51.585 "dma_device_id": "system", 00:22:51.585 "dma_device_type": 1 00:22:51.585 }, 00:22:51.585 { 00:22:51.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.585 "dma_device_type": 2 00:22:51.585 } 00:22:51.585 ], 00:22:51.585 "driver_specific": {} 00:22:51.585 } 00:22:51.585 ] 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:51.585 07:43:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.585 "name": "Existed_Raid", 00:22:51.585 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:51.585 "strip_size_kb": 64, 00:22:51.585 "state": "configuring", 00:22:51.585 "raid_level": "concat", 00:22:51.585 "superblock": true, 00:22:51.585 "num_base_bdevs": 4, 00:22:51.585 "num_base_bdevs_discovered": 3, 00:22:51.585 "num_base_bdevs_operational": 4, 00:22:51.585 "base_bdevs_list": [ 00:22:51.585 { 00:22:51.585 "name": "BaseBdev1", 00:22:51.585 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:51.585 "is_configured": true, 00:22:51.585 "data_offset": 2048, 00:22:51.585 "data_size": 63488 00:22:51.585 }, 00:22:51.585 { 
00:22:51.585 "name": null, 00:22:51.585 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:51.585 "is_configured": false, 00:22:51.585 "data_offset": 0, 00:22:51.585 "data_size": 63488 00:22:51.585 }, 00:22:51.585 { 00:22:51.585 "name": "BaseBdev3", 00:22:51.585 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:51.585 "is_configured": true, 00:22:51.585 "data_offset": 2048, 00:22:51.585 "data_size": 63488 00:22:51.585 }, 00:22:51.585 { 00:22:51.585 "name": "BaseBdev4", 00:22:51.585 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:51.585 "is_configured": true, 00:22:51.585 "data_offset": 2048, 00:22:51.585 "data_size": 63488 00:22:51.585 } 00:22:51.585 ] 00:22:51.585 }' 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.585 07:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.843 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.843 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:51.843 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:51.843 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.101 [2024-10-07 07:43:51.435818] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:52.101 07:43:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.101 "name": "Existed_Raid", 00:22:52.101 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:52.101 "strip_size_kb": 64, 00:22:52.101 "state": "configuring", 00:22:52.101 "raid_level": "concat", 00:22:52.101 "superblock": true, 00:22:52.101 "num_base_bdevs": 4, 00:22:52.101 "num_base_bdevs_discovered": 2, 00:22:52.101 "num_base_bdevs_operational": 4, 00:22:52.101 "base_bdevs_list": [ 00:22:52.101 { 00:22:52.101 "name": "BaseBdev1", 00:22:52.101 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:52.101 "is_configured": true, 00:22:52.101 "data_offset": 2048, 00:22:52.101 "data_size": 63488 00:22:52.101 }, 00:22:52.101 { 00:22:52.101 "name": null, 00:22:52.101 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:52.101 "is_configured": false, 00:22:52.101 "data_offset": 0, 00:22:52.101 "data_size": 63488 00:22:52.101 }, 00:22:52.101 { 00:22:52.101 "name": null, 00:22:52.101 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:52.101 "is_configured": false, 00:22:52.101 "data_offset": 0, 00:22:52.101 "data_size": 63488 00:22:52.101 }, 00:22:52.101 { 00:22:52.101 "name": "BaseBdev4", 00:22:52.101 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:52.101 "is_configured": true, 00:22:52.101 "data_offset": 2048, 00:22:52.101 "data_size": 63488 00:22:52.101 } 00:22:52.101 ] 00:22:52.101 }' 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.101 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.360 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.360 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:52.360 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.360 07:43:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:52.360 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.620 [2024-10-07 07:43:51.952007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.620 "name": "Existed_Raid", 00:22:52.620 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:52.620 "strip_size_kb": 64, 00:22:52.620 "state": "configuring", 00:22:52.620 "raid_level": "concat", 00:22:52.620 "superblock": true, 00:22:52.620 "num_base_bdevs": 4, 00:22:52.620 "num_base_bdevs_discovered": 3, 00:22:52.620 "num_base_bdevs_operational": 4, 00:22:52.620 "base_bdevs_list": [ 00:22:52.620 { 00:22:52.620 "name": "BaseBdev1", 00:22:52.620 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:52.620 "is_configured": true, 00:22:52.620 "data_offset": 2048, 00:22:52.620 "data_size": 63488 00:22:52.620 }, 00:22:52.620 { 00:22:52.620 "name": null, 00:22:52.620 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:52.620 "is_configured": false, 00:22:52.620 "data_offset": 0, 00:22:52.620 "data_size": 63488 00:22:52.620 }, 00:22:52.620 { 00:22:52.620 "name": "BaseBdev3", 00:22:52.620 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:52.620 "is_configured": true, 00:22:52.620 "data_offset": 2048, 00:22:52.620 "data_size": 63488 00:22:52.620 }, 00:22:52.620 { 00:22:52.620 "name": "BaseBdev4", 00:22:52.620 "uuid": 
"cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:52.620 "is_configured": true, 00:22:52.620 "data_offset": 2048, 00:22:52.620 "data_size": 63488 00:22:52.620 } 00:22:52.620 ] 00:22:52.620 }' 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.620 07:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.878 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.878 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:52.878 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:52.878 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.878 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.137 [2024-10-07 07:43:52.452162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.137 "name": "Existed_Raid", 00:22:53.137 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:53.137 "strip_size_kb": 64, 00:22:53.137 "state": "configuring", 00:22:53.137 "raid_level": "concat", 00:22:53.137 "superblock": true, 00:22:53.137 "num_base_bdevs": 4, 00:22:53.137 "num_base_bdevs_discovered": 2, 00:22:53.137 "num_base_bdevs_operational": 4, 00:22:53.137 "base_bdevs_list": [ 00:22:53.137 { 00:22:53.137 "name": null, 00:22:53.137 
"uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:53.137 "is_configured": false, 00:22:53.137 "data_offset": 0, 00:22:53.137 "data_size": 63488 00:22:53.137 }, 00:22:53.137 { 00:22:53.137 "name": null, 00:22:53.137 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:53.137 "is_configured": false, 00:22:53.137 "data_offset": 0, 00:22:53.137 "data_size": 63488 00:22:53.137 }, 00:22:53.137 { 00:22:53.137 "name": "BaseBdev3", 00:22:53.137 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:53.137 "is_configured": true, 00:22:53.137 "data_offset": 2048, 00:22:53.137 "data_size": 63488 00:22:53.137 }, 00:22:53.137 { 00:22:53.137 "name": "BaseBdev4", 00:22:53.137 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:53.137 "is_configured": true, 00:22:53.137 "data_offset": 2048, 00:22:53.137 "data_size": 63488 00:22:53.137 } 00:22:53.137 ] 00:22:53.137 }' 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.137 07:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.717 [2024-10-07 07:43:53.078215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.717 "name": "Existed_Raid", 00:22:53.717 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:53.717 "strip_size_kb": 64, 00:22:53.717 "state": "configuring", 00:22:53.717 "raid_level": "concat", 00:22:53.717 "superblock": true, 00:22:53.717 "num_base_bdevs": 4, 00:22:53.717 "num_base_bdevs_discovered": 3, 00:22:53.717 "num_base_bdevs_operational": 4, 00:22:53.717 "base_bdevs_list": [ 00:22:53.717 { 00:22:53.717 "name": null, 00:22:53.717 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:53.717 "is_configured": false, 00:22:53.717 "data_offset": 0, 00:22:53.717 "data_size": 63488 00:22:53.717 }, 00:22:53.717 { 00:22:53.717 "name": "BaseBdev2", 00:22:53.717 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:53.717 "is_configured": true, 00:22:53.717 "data_offset": 2048, 00:22:53.717 "data_size": 63488 00:22:53.717 }, 00:22:53.717 { 00:22:53.717 "name": "BaseBdev3", 00:22:53.717 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:53.717 "is_configured": true, 00:22:53.717 "data_offset": 2048, 00:22:53.717 "data_size": 63488 00:22:53.717 }, 00:22:53.717 { 00:22:53.717 "name": "BaseBdev4", 00:22:53.717 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:53.717 "is_configured": true, 00:22:53.717 "data_offset": 2048, 00:22:53.717 "data_size": 63488 00:22:53.717 } 00:22:53.717 ] 00:22:53.717 }' 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.717 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.991 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.991 07:43:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:53.991 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.991 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d06205ca-1710-43a5-86e0-f522935bf3f4 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.250 NewBaseBdev 00:22:54.250 [2024-10-07 07:43:53.669157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:54.250 [2024-10-07 07:43:53.669428] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:54.250 [2024-10-07 07:43:53.669444] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:54.250 [2024-10-07 07:43:53.669796] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:54.250 [2024-10-07 07:43:53.669993] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:54.250 [2024-10-07 07:43:53.670012] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:54.250 [2024-10-07 07:43:53.670153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.250 
07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.250 [ 00:22:54.250 { 00:22:54.250 "name": "NewBaseBdev", 00:22:54.250 "aliases": [ 00:22:54.250 "d06205ca-1710-43a5-86e0-f522935bf3f4" 00:22:54.250 ], 00:22:54.250 "product_name": "Malloc disk", 00:22:54.250 "block_size": 512, 00:22:54.250 "num_blocks": 65536, 00:22:54.250 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:54.250 "assigned_rate_limits": { 00:22:54.250 "rw_ios_per_sec": 0, 00:22:54.250 "rw_mbytes_per_sec": 0, 00:22:54.250 "r_mbytes_per_sec": 0, 00:22:54.250 "w_mbytes_per_sec": 0 00:22:54.250 }, 00:22:54.250 "claimed": true, 00:22:54.250 "claim_type": "exclusive_write", 00:22:54.250 "zoned": false, 00:22:54.250 "supported_io_types": { 00:22:54.250 "read": true, 00:22:54.250 "write": true, 00:22:54.250 "unmap": true, 00:22:54.250 "flush": true, 00:22:54.250 "reset": true, 00:22:54.250 "nvme_admin": false, 00:22:54.250 "nvme_io": false, 00:22:54.250 "nvme_io_md": false, 00:22:54.250 "write_zeroes": true, 00:22:54.250 "zcopy": true, 00:22:54.250 "get_zone_info": false, 00:22:54.250 "zone_management": false, 00:22:54.250 "zone_append": false, 00:22:54.250 "compare": false, 00:22:54.250 "compare_and_write": false, 00:22:54.250 "abort": true, 00:22:54.250 "seek_hole": false, 00:22:54.250 "seek_data": false, 00:22:54.250 "copy": true, 00:22:54.250 "nvme_iov_md": false 00:22:54.250 }, 00:22:54.250 "memory_domains": [ 00:22:54.250 { 00:22:54.250 "dma_device_id": "system", 00:22:54.250 "dma_device_type": 1 00:22:54.250 }, 00:22:54.250 { 00:22:54.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.250 "dma_device_type": 2 00:22:54.250 } 00:22:54.250 ], 00:22:54.250 "driver_specific": {} 00:22:54.250 } 00:22:54.250 ] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:22:54.250 07:43:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.250 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.251 "name": "Existed_Raid", 00:22:54.251 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:54.251 "strip_size_kb": 64, 00:22:54.251 
"state": "online", 00:22:54.251 "raid_level": "concat", 00:22:54.251 "superblock": true, 00:22:54.251 "num_base_bdevs": 4, 00:22:54.251 "num_base_bdevs_discovered": 4, 00:22:54.251 "num_base_bdevs_operational": 4, 00:22:54.251 "base_bdevs_list": [ 00:22:54.251 { 00:22:54.251 "name": "NewBaseBdev", 00:22:54.251 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:54.251 "is_configured": true, 00:22:54.251 "data_offset": 2048, 00:22:54.251 "data_size": 63488 00:22:54.251 }, 00:22:54.251 { 00:22:54.251 "name": "BaseBdev2", 00:22:54.251 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:54.251 "is_configured": true, 00:22:54.251 "data_offset": 2048, 00:22:54.251 "data_size": 63488 00:22:54.251 }, 00:22:54.251 { 00:22:54.251 "name": "BaseBdev3", 00:22:54.251 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:54.251 "is_configured": true, 00:22:54.251 "data_offset": 2048, 00:22:54.251 "data_size": 63488 00:22:54.251 }, 00:22:54.251 { 00:22:54.251 "name": "BaseBdev4", 00:22:54.251 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:54.251 "is_configured": true, 00:22:54.251 "data_offset": 2048, 00:22:54.251 "data_size": 63488 00:22:54.251 } 00:22:54.251 ] 00:22:54.251 }' 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.251 07:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:54.817 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:54.817 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.817 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.817 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.817 
07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.818 [2024-10-07 07:43:54.169756] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.818 "name": "Existed_Raid", 00:22:54.818 "aliases": [ 00:22:54.818 "0fb600bb-e184-4542-b69a-8afd37df5e95" 00:22:54.818 ], 00:22:54.818 "product_name": "Raid Volume", 00:22:54.818 "block_size": 512, 00:22:54.818 "num_blocks": 253952, 00:22:54.818 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:54.818 "assigned_rate_limits": { 00:22:54.818 "rw_ios_per_sec": 0, 00:22:54.818 "rw_mbytes_per_sec": 0, 00:22:54.818 "r_mbytes_per_sec": 0, 00:22:54.818 "w_mbytes_per_sec": 0 00:22:54.818 }, 00:22:54.818 "claimed": false, 00:22:54.818 "zoned": false, 00:22:54.818 "supported_io_types": { 00:22:54.818 "read": true, 00:22:54.818 "write": true, 00:22:54.818 "unmap": true, 00:22:54.818 "flush": true, 00:22:54.818 "reset": true, 00:22:54.818 "nvme_admin": false, 00:22:54.818 "nvme_io": false, 00:22:54.818 "nvme_io_md": false, 00:22:54.818 "write_zeroes": true, 00:22:54.818 "zcopy": false, 00:22:54.818 "get_zone_info": false, 00:22:54.818 "zone_management": false, 00:22:54.818 "zone_append": false, 00:22:54.818 "compare": false, 00:22:54.818 "compare_and_write": false, 00:22:54.818 "abort": 
false, 00:22:54.818 "seek_hole": false, 00:22:54.818 "seek_data": false, 00:22:54.818 "copy": false, 00:22:54.818 "nvme_iov_md": false 00:22:54.818 }, 00:22:54.818 "memory_domains": [ 00:22:54.818 { 00:22:54.818 "dma_device_id": "system", 00:22:54.818 "dma_device_type": 1 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.818 "dma_device_type": 2 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "dma_device_id": "system", 00:22:54.818 "dma_device_type": 1 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.818 "dma_device_type": 2 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "dma_device_id": "system", 00:22:54.818 "dma_device_type": 1 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.818 "dma_device_type": 2 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "dma_device_id": "system", 00:22:54.818 "dma_device_type": 1 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.818 "dma_device_type": 2 00:22:54.818 } 00:22:54.818 ], 00:22:54.818 "driver_specific": { 00:22:54.818 "raid": { 00:22:54.818 "uuid": "0fb600bb-e184-4542-b69a-8afd37df5e95", 00:22:54.818 "strip_size_kb": 64, 00:22:54.818 "state": "online", 00:22:54.818 "raid_level": "concat", 00:22:54.818 "superblock": true, 00:22:54.818 "num_base_bdevs": 4, 00:22:54.818 "num_base_bdevs_discovered": 4, 00:22:54.818 "num_base_bdevs_operational": 4, 00:22:54.818 "base_bdevs_list": [ 00:22:54.818 { 00:22:54.818 "name": "NewBaseBdev", 00:22:54.818 "uuid": "d06205ca-1710-43a5-86e0-f522935bf3f4", 00:22:54.818 "is_configured": true, 00:22:54.818 "data_offset": 2048, 00:22:54.818 "data_size": 63488 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "name": "BaseBdev2", 00:22:54.818 "uuid": "aaa5baef-0c5b-4cba-96f2-5f407fc4d89d", 00:22:54.818 "is_configured": true, 00:22:54.818 "data_offset": 2048, 00:22:54.818 "data_size": 63488 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 
"name": "BaseBdev3", 00:22:54.818 "uuid": "0911d2a1-09a7-4518-88db-002863151364", 00:22:54.818 "is_configured": true, 00:22:54.818 "data_offset": 2048, 00:22:54.818 "data_size": 63488 00:22:54.818 }, 00:22:54.818 { 00:22:54.818 "name": "BaseBdev4", 00:22:54.818 "uuid": "cb6d6f1c-0a4f-47e8-9ecd-cbe8ceb57503", 00:22:54.818 "is_configured": true, 00:22:54.818 "data_offset": 2048, 00:22:54.818 "data_size": 63488 00:22:54.818 } 00:22:54.818 ] 00:22:54.818 } 00:22:54.818 } 00:22:54.818 }' 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:54.818 BaseBdev2 00:22:54.818 BaseBdev3 00:22:54.818 BaseBdev4' 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.818 07:43:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.818 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.078 [2024-10-07 07:43:54.541413] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:55.078 [2024-10-07 07:43:54.541589] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.078 [2024-10-07 07:43:54.541706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.078 [2024-10-07 07:43:54.541816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.078 [2024-10-07 07:43:54.541831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72114 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 72114 ']' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 72114 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 72114 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:22:55.078 killing process with pid 72114 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 72114' 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 72114 00:22:55.078 [2024-10-07 07:43:54.587149] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:55.078 07:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 72114 00:22:55.645 [2024-10-07 07:43:55.010495] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:57.018 07:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:57.018 00:22:57.018 real 0m12.233s 00:22:57.018 user 0m19.513s 00:22:57.018 sys 0m2.118s 00:22:57.018 07:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:22:57.018 07:43:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.018 ************************************ 00:22:57.018 END TEST raid_state_function_test_sb 00:22:57.018 ************************************ 00:22:57.018 07:43:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:22:57.018 07:43:56 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:22:57.018 07:43:56 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:22:57.018 07:43:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:57.018 ************************************ 00:22:57.018 START TEST raid_superblock_test 00:22:57.018 ************************************ 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test concat 4 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72790 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72790 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 72790 ']' 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:22:57.018 07:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.018 [2024-10-07 07:43:56.529224] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:22:57.018 [2024-10-07 07:43:56.529630] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72790 ]
00:22:57.276 [2024-10-07 07:43:56.713557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:57.534 [2024-10-07 07:43:56.936843] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:22:57.790 [2024-10-07 07:43:57.154009] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:57.790 [2024-10-07 07:43:57.154260] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 ))
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.048 malloc1
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.048 [2024-10-07 07:43:57.591845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:58.048 [2024-10-07 07:43:57.592036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:58.048 [2024-10-07 07:43:57.592100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:22:58.048 [2024-10-07 07:43:57.592187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:58.048 [2024-10-07 07:43:57.594665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:58.048 [2024-10-07 07:43:57.594827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:58.048 pt1
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.048 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.306 malloc2
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.306 [2024-10-07 07:43:57.657132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:58.306 [2024-10-07 07:43:57.657346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:58.306 [2024-10-07 07:43:57.657411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:22:58.306 [2024-10-07 07:43:57.657492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:58.306 [2024-10-07 07:43:57.659979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:58.306 [2024-10-07 07:43:57.660019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:58.306 pt2
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.306 malloc3
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.306 [2024-10-07 07:43:57.711172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:58.306 [2024-10-07 07:43:57.711351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:58.306 [2024-10-07 07:43:57.711430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:22:58.306 [2024-10-07 07:43:57.711513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:58.306 [2024-10-07 07:43:57.714128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:58.306 [2024-10-07 07:43:57.714263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:58.306 pt3
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.306 malloc4
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.306 [2024-10-07 07:43:57.769117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:22:58.306 [2024-10-07 07:43:57.769188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:58.306 [2024-10-07 07:43:57.769214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:22:58.306 [2024-10-07 07:43:57.769226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:58.306 [2024-10-07 07:43:57.771854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:58.306 [2024-10-07 07:43:57.771893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:22:58.306 pt4
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.306 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.306 [2024-10-07 07:43:57.777183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:58.307 [2024-10-07 07:43:57.779379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:58.307 [2024-10-07 07:43:57.779556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:58.307 [2024-10-07 07:43:57.779658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:22:58.307 [2024-10-07 07:43:57.779954] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:22:58.307 [2024-10-07 07:43:57.780065] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:22:58.307 [2024-10-07 07:43:57.780399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:22:58.307 [2024-10-07 07:43:57.780611] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:22:58.307 [2024-10-07 07:43:57.780632] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:22:58.307 [2024-10-07 07:43:57.780829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:58.307 "name": "raid_bdev1",
00:22:58.307 "uuid": "304089b5-83b3-4800-add4-808656e768b4",
00:22:58.307 "strip_size_kb": 64,
00:22:58.307 "state": "online",
00:22:58.307 "raid_level": "concat",
00:22:58.307 "superblock": true,
00:22:58.307 "num_base_bdevs": 4,
00:22:58.307 "num_base_bdevs_discovered": 4,
00:22:58.307 "num_base_bdevs_operational": 4,
00:22:58.307 "base_bdevs_list": [
00:22:58.307 {
00:22:58.307 "name": "pt1",
00:22:58.307 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:58.307 "is_configured": true,
00:22:58.307 "data_offset": 2048,
00:22:58.307 "data_size": 63488
00:22:58.307 },
00:22:58.307 {
00:22:58.307 "name": "pt2",
00:22:58.307 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:58.307 "is_configured": true,
00:22:58.307 "data_offset": 2048,
00:22:58.307 "data_size": 63488
00:22:58.307 },
00:22:58.307 {
00:22:58.307 "name": "pt3",
00:22:58.307 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:58.307 "is_configured": true,
00:22:58.307 "data_offset": 2048,
00:22:58.307 "data_size": 63488
00:22:58.307 },
00:22:58.307 {
00:22:58.307 "name": "pt4",
00:22:58.307 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:58.307 "is_configured": true,
00:22:58.307 "data_offset": 2048,
00:22:58.307 "data_size": 63488
00:22:58.307 }
00:22:58.307 ]
00:22:58.307 }'
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:58.307 07:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.873 [2024-10-07 07:43:58.221571] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.873 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:58.873 "name": "raid_bdev1",
00:22:58.873 "aliases": [
00:22:58.873 "304089b5-83b3-4800-add4-808656e768b4"
00:22:58.873 ],
00:22:58.873 "product_name": "Raid Volume",
00:22:58.873 "block_size": 512,
00:22:58.873 "num_blocks": 253952,
00:22:58.873 "uuid": "304089b5-83b3-4800-add4-808656e768b4",
00:22:58.873 "assigned_rate_limits": {
00:22:58.873 "rw_ios_per_sec": 0,
00:22:58.873 "rw_mbytes_per_sec": 0,
00:22:58.873 "r_mbytes_per_sec": 0,
00:22:58.873 "w_mbytes_per_sec": 0
00:22:58.873 },
00:22:58.873 "claimed": false,
00:22:58.873 "zoned": false,
00:22:58.873 "supported_io_types": {
00:22:58.873 "read": true,
00:22:58.873 "write": true,
00:22:58.873 "unmap": true,
00:22:58.873 "flush": true,
00:22:58.873 "reset": true,
00:22:58.873 "nvme_admin": false,
00:22:58.873 "nvme_io": false,
00:22:58.873 "nvme_io_md": false,
00:22:58.873 "write_zeroes": true,
00:22:58.873 "zcopy": false,
00:22:58.873 "get_zone_info": false,
00:22:58.873 "zone_management": false,
00:22:58.873 "zone_append": false,
00:22:58.873 "compare": false,
00:22:58.873 "compare_and_write": false,
00:22:58.873 "abort": false,
00:22:58.873 "seek_hole": false,
00:22:58.873 "seek_data": false,
00:22:58.873 "copy": false,
00:22:58.873 "nvme_iov_md": false
00:22:58.873 },
00:22:58.873 "memory_domains": [
00:22:58.873 {
00:22:58.873 "dma_device_id": "system",
00:22:58.873 "dma_device_type": 1
00:22:58.873 },
00:22:58.873 {
00:22:58.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:58.873 "dma_device_type": 2
00:22:58.873 },
00:22:58.873 {
00:22:58.873 "dma_device_id": "system",
00:22:58.873 "dma_device_type": 1
00:22:58.873 },
00:22:58.873 {
00:22:58.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:58.873 "dma_device_type": 2
00:22:58.873 },
00:22:58.873 {
00:22:58.873 "dma_device_id": "system",
00:22:58.873 "dma_device_type": 1
00:22:58.873 },
00:22:58.873 {
00:22:58.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:58.873 "dma_device_type": 2
00:22:58.873 },
00:22:58.873 {
00:22:58.873 "dma_device_id": "system",
00:22:58.873 "dma_device_type": 1
00:22:58.873 },
00:22:58.873 {
00:22:58.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:58.874 "dma_device_type": 2
00:22:58.874 }
00:22:58.874 ],
00:22:58.874 "driver_specific": {
00:22:58.874 "raid": {
00:22:58.874 "uuid": "304089b5-83b3-4800-add4-808656e768b4",
00:22:58.874 "strip_size_kb": 64,
00:22:58.874 "state": "online",
00:22:58.874 "raid_level": "concat",
00:22:58.874 "superblock": true,
00:22:58.874 "num_base_bdevs": 4,
00:22:58.874 "num_base_bdevs_discovered": 4,
00:22:58.874 "num_base_bdevs_operational": 4,
00:22:58.874 "base_bdevs_list": [
00:22:58.874 {
00:22:58.874 "name": "pt1",
00:22:58.874 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:58.874 "is_configured": true,
00:22:58.874 "data_offset": 2048,
00:22:58.874 "data_size": 63488
00:22:58.874 },
00:22:58.874 {
00:22:58.874 "name": "pt2",
00:22:58.874 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:58.874 "is_configured": true,
00:22:58.874 "data_offset": 2048,
00:22:58.874 "data_size": 63488
00:22:58.874 },
00:22:58.874 {
00:22:58.874 "name": "pt3",
00:22:58.874 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:58.874 "is_configured": true,
00:22:58.874 "data_offset": 2048,
00:22:58.874 "data_size": 63488
00:22:58.874 },
00:22:58.874 {
00:22:58.874 "name": "pt4",
00:22:58.874 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:58.874 "is_configured": true,
00:22:58.874 "data_offset": 2048,
00:22:58.874 "data_size": 63488
00:22:58.874 }
00:22:58.874 ]
00:22:58.874 }
00:22:58.874 }
00:22:58.874 }'
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:22:58.874 pt2
00:22:58.874 pt3
00:22:58.874 pt4'
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:58.874 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.132 [2024-10-07 07:43:58.529617] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=304089b5-83b3-4800-add4-808656e768b4
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 304089b5-83b3-4800-add4-808656e768b4 ']'
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.132 [2024-10-07 07:43:58.569277] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:59.132 [2024-10-07 07:43:58.569424] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:59.132 [2024-10-07 07:43:58.569658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:59.132 [2024-10-07 07:43:58.569761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:59.132 [2024-10-07 07:43:58.569787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:22:59.132 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.133 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.391 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.391 [2024-10-07 07:43:58.713325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:59.391 [2024-10-07 07:43:58.715677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:59.391 [2024-10-07 07:43:58.715857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:22:59.391 [2024-10-07 07:43:58.715904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:22:59.391 [2024-10-07 07:43:58.715955] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:22:59.391 [2024-10-07 07:43:58.716008] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:22:59.392 [2024-10-07 07:43:58.716031] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:22:59.392 [2024-10-07 07:43:58.716052] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:22:59.392 [2024-10-07 07:43:58.716069] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:59.392 [2024-10-07 07:43:58.716082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:22:59.392 request:
00:22:59.392 {
00:22:59.392 "name": "raid_bdev1",
00:22:59.392 "raid_level": "concat",
00:22:59.392 "base_bdevs": [
00:22:59.392 "malloc1",
00:22:59.392 "malloc2",
00:22:59.392 "malloc3",
00:22:59.392 "malloc4"
00:22:59.392 ],
00:22:59.392 "strip_size_kb": 64,
00:22:59.392 "superblock": false,
00:22:59.392 "method": "bdev_raid_create",
00:22:59.392 "req_id": 1
00:22:59.392 }
00:22:59.392 Got JSON-RPC error response
00:22:59.392 response:
00:22:59.392 {
00:22:59.392 "code": -17,
00:22:59.392 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:59.392 }
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]]
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 ))
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]]
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 ))
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.392 [2024-10-07 07:43:58.773381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:59.392 [2024-10-07 07:43:58.773625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:59.392 [2024-10-07 07:43:58.773656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:22:59.392 [2024-10-07 07:43:58.773674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:59.392 [2024-10-07 07:43:58.776295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:59.392 [2024-10-07 07:43:58.776343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:59.392 [2024-10-07 07:43:58.776434] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:22:59.392 [2024-10-07 07:43:58.776506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:59.392 pt1
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:59.392 "name": "raid_bdev1",
00:22:59.392 "uuid": "304089b5-83b3-4800-add4-808656e768b4",
00:22:59.392 "strip_size_kb": 64,
00:22:59.392 "state": "configuring",
00:22:59.392 "raid_level": "concat",
00:22:59.392 "superblock": true,
00:22:59.392 "num_base_bdevs": 4,
00:22:59.392 "num_base_bdevs_discovered": 1,
00:22:59.392 "num_base_bdevs_operational": 4,
00:22:59.392 "base_bdevs_list": [
00:22:59.392 {
00:22:59.392 "name": "pt1",
00:22:59.392 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:59.392 "is_configured": true,
00:22:59.392 "data_offset": 2048,
00:22:59.392 "data_size": 63488
00:22:59.392 },
00:22:59.392 {
00:22:59.392 "name": null,
00:22:59.392 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:59.392 "is_configured": false,
00:22:59.392 "data_offset": 2048,
00:22:59.392 "data_size": 63488
00:22:59.392 },
00:22:59.392 {
00:22:59.392 "name": null,
00:22:59.392 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:59.392 "is_configured": false,
00:22:59.392 "data_offset": 2048,
00:22:59.392 "data_size": 63488
00:22:59.392 },
00:22:59.392 {
00:22:59.392 "name": null,
00:22:59.392 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:59.392 "is_configured": false,
00:22:59.392 "data_offset": 2048,
00:22:59.392 "data_size": 63488
00:22:59.392 }
00:22:59.392 ]
00:22:59.392 }'
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:59.392 07:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable
00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:59.984 [2024-10-07 07:43:59.237421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:59.984 [2024-10-07 07:43:59.237628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:59.984 [2024-10-07 07:43:59.237758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:22:59.984 [2024-10-07 07:43:59.237861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:59.984 [2024-10-07 07:43:59.238365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:59.984 [2024-10-07 07:43:59.238526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:59.984 [2024-10-07 07:43:59.238725] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:22:59.984 [2024-10-07 07:43:59.238853]
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.984 pt2 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.984 [2024-10-07 07:43:59.245439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.984 07:43:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.984 "name": "raid_bdev1", 00:22:59.984 "uuid": "304089b5-83b3-4800-add4-808656e768b4", 00:22:59.984 "strip_size_kb": 64, 00:22:59.984 "state": "configuring", 00:22:59.984 "raid_level": "concat", 00:22:59.984 "superblock": true, 00:22:59.984 "num_base_bdevs": 4, 00:22:59.984 "num_base_bdevs_discovered": 1, 00:22:59.984 "num_base_bdevs_operational": 4, 00:22:59.984 "base_bdevs_list": [ 00:22:59.984 { 00:22:59.984 "name": "pt1", 00:22:59.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:59.984 "is_configured": true, 00:22:59.984 "data_offset": 2048, 00:22:59.984 "data_size": 63488 00:22:59.984 }, 00:22:59.984 { 00:22:59.984 "name": null, 00:22:59.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.984 "is_configured": false, 00:22:59.984 "data_offset": 0, 00:22:59.984 "data_size": 63488 00:22:59.984 }, 00:22:59.984 { 00:22:59.984 "name": null, 00:22:59.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.984 "is_configured": false, 00:22:59.984 "data_offset": 2048, 00:22:59.984 "data_size": 63488 00:22:59.984 }, 00:22:59.984 { 00:22:59.984 "name": null, 00:22:59.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:59.984 "is_configured": false, 00:22:59.984 "data_offset": 2048, 00:22:59.984 "data_size": 63488 00:22:59.984 } 00:22:59.984 ] 00:22:59.984 }' 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.984 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.244 [2024-10-07 07:43:59.693553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:00.244 [2024-10-07 07:43:59.693744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.244 [2024-10-07 07:43:59.693849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:00.244 [2024-10-07 07:43:59.693869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.244 [2024-10-07 07:43:59.694328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.244 [2024-10-07 07:43:59.694356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:00.244 [2024-10-07 07:43:59.694447] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:00.244 [2024-10-07 07:43:59.694485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:00.244 pt2 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.244 [2024-10-07 07:43:59.701515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:00.244 [2024-10-07 07:43:59.701568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.244 [2024-10-07 07:43:59.701596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:00.244 [2024-10-07 07:43:59.701609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.244 [2024-10-07 07:43:59.702017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.244 [2024-10-07 07:43:59.702040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:00.244 [2024-10-07 07:43:59.702110] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:00.244 [2024-10-07 07:43:59.702130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:00.244 pt3 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.244 [2024-10-07 07:43:59.709488] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:23:00.244 [2024-10-07 07:43:59.709689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.244 [2024-10-07 07:43:59.709822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:00.244 [2024-10-07 07:43:59.709954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.244 [2024-10-07 07:43:59.710411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.244 [2024-10-07 07:43:59.710545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:00.244 [2024-10-07 07:43:59.710729] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:00.244 [2024-10-07 07:43:59.710851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:00.244 [2024-10-07 07:43:59.711040] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:00.244 [2024-10-07 07:43:59.711136] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:00.244 [2024-10-07 07:43:59.711462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:00.244 [2024-10-07 07:43:59.711701] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:00.244 [2024-10-07 07:43:59.711812] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:00.244 [2024-10-07 07:43:59.712051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.244 pt4 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:00.244 
07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.244 "name": "raid_bdev1", 00:23:00.244 "uuid": "304089b5-83b3-4800-add4-808656e768b4", 00:23:00.244 "strip_size_kb": 64, 00:23:00.244 "state": "online", 00:23:00.244 "raid_level": "concat", 00:23:00.244 "superblock": true, 00:23:00.244 
"num_base_bdevs": 4, 00:23:00.244 "num_base_bdevs_discovered": 4, 00:23:00.244 "num_base_bdevs_operational": 4, 00:23:00.244 "base_bdevs_list": [ 00:23:00.244 { 00:23:00.244 "name": "pt1", 00:23:00.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:00.244 "is_configured": true, 00:23:00.244 "data_offset": 2048, 00:23:00.244 "data_size": 63488 00:23:00.244 }, 00:23:00.244 { 00:23:00.244 "name": "pt2", 00:23:00.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.244 "is_configured": true, 00:23:00.244 "data_offset": 2048, 00:23:00.244 "data_size": 63488 00:23:00.244 }, 00:23:00.244 { 00:23:00.244 "name": "pt3", 00:23:00.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.244 "is_configured": true, 00:23:00.244 "data_offset": 2048, 00:23:00.244 "data_size": 63488 00:23:00.244 }, 00:23:00.244 { 00:23:00.244 "name": "pt4", 00:23:00.244 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:00.244 "is_configured": true, 00:23:00.244 "data_offset": 2048, 00:23:00.244 "data_size": 63488 00:23:00.244 } 00:23:00.244 ] 00:23:00.244 }' 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.244 07:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.503 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:00.503 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:00.503 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:00.503 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:00.503 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:00.503 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:00.762 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:00.762 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:00.762 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.762 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.762 [2024-10-07 07:44:00.069952] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.762 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:00.762 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:00.762 "name": "raid_bdev1", 00:23:00.762 "aliases": [ 00:23:00.762 "304089b5-83b3-4800-add4-808656e768b4" 00:23:00.762 ], 00:23:00.762 "product_name": "Raid Volume", 00:23:00.762 "block_size": 512, 00:23:00.762 "num_blocks": 253952, 00:23:00.762 "uuid": "304089b5-83b3-4800-add4-808656e768b4", 00:23:00.762 "assigned_rate_limits": { 00:23:00.762 "rw_ios_per_sec": 0, 00:23:00.762 "rw_mbytes_per_sec": 0, 00:23:00.762 "r_mbytes_per_sec": 0, 00:23:00.762 "w_mbytes_per_sec": 0 00:23:00.762 }, 00:23:00.762 "claimed": false, 00:23:00.762 "zoned": false, 00:23:00.762 "supported_io_types": { 00:23:00.762 "read": true, 00:23:00.762 "write": true, 00:23:00.762 "unmap": true, 00:23:00.762 "flush": true, 00:23:00.762 "reset": true, 00:23:00.762 "nvme_admin": false, 00:23:00.762 "nvme_io": false, 00:23:00.762 "nvme_io_md": false, 00:23:00.762 "write_zeroes": true, 00:23:00.762 "zcopy": false, 00:23:00.762 "get_zone_info": false, 00:23:00.762 "zone_management": false, 00:23:00.762 "zone_append": false, 00:23:00.762 "compare": false, 00:23:00.762 "compare_and_write": false, 00:23:00.762 "abort": false, 00:23:00.762 "seek_hole": false, 00:23:00.762 "seek_data": false, 00:23:00.762 "copy": false, 00:23:00.762 "nvme_iov_md": false 00:23:00.762 }, 00:23:00.763 "memory_domains": [ 00:23:00.763 { 00:23:00.763 "dma_device_id": "system", 
00:23:00.763 "dma_device_type": 1 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.763 "dma_device_type": 2 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "dma_device_id": "system", 00:23:00.763 "dma_device_type": 1 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.763 "dma_device_type": 2 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "dma_device_id": "system", 00:23:00.763 "dma_device_type": 1 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.763 "dma_device_type": 2 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "dma_device_id": "system", 00:23:00.763 "dma_device_type": 1 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.763 "dma_device_type": 2 00:23:00.763 } 00:23:00.763 ], 00:23:00.763 "driver_specific": { 00:23:00.763 "raid": { 00:23:00.763 "uuid": "304089b5-83b3-4800-add4-808656e768b4", 00:23:00.763 "strip_size_kb": 64, 00:23:00.763 "state": "online", 00:23:00.763 "raid_level": "concat", 00:23:00.763 "superblock": true, 00:23:00.763 "num_base_bdevs": 4, 00:23:00.763 "num_base_bdevs_discovered": 4, 00:23:00.763 "num_base_bdevs_operational": 4, 00:23:00.763 "base_bdevs_list": [ 00:23:00.763 { 00:23:00.763 "name": "pt1", 00:23:00.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:00.763 "is_configured": true, 00:23:00.763 "data_offset": 2048, 00:23:00.763 "data_size": 63488 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "name": "pt2", 00:23:00.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.763 "is_configured": true, 00:23:00.763 "data_offset": 2048, 00:23:00.763 "data_size": 63488 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "name": "pt3", 00:23:00.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.763 "is_configured": true, 00:23:00.763 "data_offset": 2048, 00:23:00.763 "data_size": 63488 00:23:00.763 }, 00:23:00.763 { 00:23:00.763 "name": "pt4", 00:23:00.763 
"uuid": "00000000-0000-0000-0000-000000000004", 00:23:00.763 "is_configured": true, 00:23:00.763 "data_offset": 2048, 00:23:00.763 "data_size": 63488 00:23:00.763 } 00:23:00.763 ] 00:23:00.763 } 00:23:00.763 } 00:23:00.763 }' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:00.763 pt2 00:23:00.763 pt3 00:23:00.763 pt4' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.763 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.022 07:44:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.022 [2024-10-07 07:44:00.398007] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 304089b5-83b3-4800-add4-808656e768b4 '!=' 304089b5-83b3-4800-add4-808656e768b4 ']' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72790 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 72790 ']' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 72790 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:23:01.022 07:44:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 72790 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 72790' 00:23:01.022 killing process with pid 72790 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 72790 00:23:01.022 [2024-10-07 07:44:00.472850] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:01.022 07:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 72790 00:23:01.022 [2024-10-07 07:44:00.473086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.022 [2024-10-07 07:44:00.473247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.022 [2024-10-07 07:44:00.473356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:01.588 [2024-10-07 07:44:00.890916] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.966 07:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:02.966 ************************************ 00:23:02.966 END TEST raid_superblock_test 00:23:02.966 ************************************ 00:23:02.966 00:23:02.966 real 0m5.795s 00:23:02.966 user 0m8.294s 00:23:02.966 sys 0m1.019s 00:23:02.966 07:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:02.966 07:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.966 
07:44:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:23:02.966 07:44:02 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:23:02.966 07:44:02 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:02.966 07:44:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:02.966 ************************************ 00:23:02.966 START TEST raid_read_error_test 00:23:02.966 ************************************ 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test concat 4 read 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g67orkv3vt 00:23:02.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73055 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73055 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 73055 ']' 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:02.966 07:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.966 [2024-10-07 07:44:02.395833] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
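waitforlisten above (autotest_common.sh@811) blocks until the freshly launched bdevperf pid is alive and its RPC socket at /var/tmp/spdk.sock answers. A stripped-down sketch of only the pid-polling half, with a throwaway `sleep` process standing in for bdevperf and the retry bound mirroring the `max_retries=100` local seen above:

```shell
# Stripped-down sketch of waitforlisten's pid polling: loop up to
# max_retries times until `kill -0` confirms the process exists.
# A background `sleep` stands in for the bdevperf process here; the
# real helper additionally waits for the RPC socket to accept calls.
sleep 2 &
raid_pid=$!
max_retries=100
while [ "$max_retries" -gt 0 ]; do
    if kill -0 "$raid_pid" 2>/dev/null; then
        echo "process is up"
        break
    fi
    max_retries=$((max_retries - 1))
    sleep 0.1
done
kill "$raid_pid" 2>/dev/null
```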
00:23:02.966 [2024-10-07 07:44:02.396009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73055 ] 00:23:03.225 [2024-10-07 07:44:02.580026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.483 [2024-10-07 07:44:02.801469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.483 [2024-10-07 07:44:03.007390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.483 [2024-10-07 07:44:03.007425] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.050 BaseBdev1_malloc 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.050 true 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.050 [2024-10-07 07:44:03.365411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:04.050 [2024-10-07 07:44:03.365592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.050 [2024-10-07 07:44:03.365640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:04.050 [2024-10-07 07:44:03.365668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.050 [2024-10-07 07:44:03.368121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.050 [2024-10-07 07:44:03.368162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:04.050 BaseBdev1 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.050 BaseBdev2_malloc 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.050 true 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.050 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 [2024-10-07 07:44:03.440386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:04.051 [2024-10-07 07:44:03.440446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.051 [2024-10-07 07:44:03.440466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:04.051 [2024-10-07 07:44:03.440480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.051 [2024-10-07 07:44:03.442852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.051 [2024-10-07 07:44:03.442896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:04.051 BaseBdev2 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 BaseBdev3_malloc 00:23:04.051 07:44:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 true 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 [2024-10-07 07:44:03.501582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:04.051 [2024-10-07 07:44:03.501638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.051 [2024-10-07 07:44:03.501659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:04.051 [2024-10-07 07:44:03.501673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.051 [2024-10-07 07:44:03.504053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.051 [2024-10-07 07:44:03.504093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:04.051 BaseBdev3 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 BaseBdev4_malloc 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 true 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 [2024-10-07 07:44:03.562821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:04.051 [2024-10-07 07:44:03.562875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.051 [2024-10-07 07:44:03.562896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:04.051 [2024-10-07 07:44:03.562910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.051 [2024-10-07 07:44:03.565281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.051 [2024-10-07 07:44:03.565327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:04.051 BaseBdev4 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 [2024-10-07 07:44:03.570902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.051 [2024-10-07 07:44:03.572965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:04.051 [2024-10-07 07:44:03.573041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:04.051 [2024-10-07 07:44:03.573101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:04.051 [2024-10-07 07:44:03.573337] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:04.051 [2024-10-07 07:44:03.573360] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:04.051 [2024-10-07 07:44:03.573621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:04.051 [2024-10-07 07:44:03.573808] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:04.051 [2024-10-07 07:44:03.573824] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:04.051 [2024-10-07 07:44:03.573997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:23:04.051 07:44:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:04.311 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.311 "name": "raid_bdev1", 00:23:04.311 "uuid": "eb70a2c1-63ae-4d79-bad6-99654123aee5", 00:23:04.311 "strip_size_kb": 64, 00:23:04.311 "state": "online", 00:23:04.311 "raid_level": "concat", 00:23:04.311 "superblock": true, 00:23:04.311 "num_base_bdevs": 4, 00:23:04.311 "num_base_bdevs_discovered": 4, 00:23:04.311 "num_base_bdevs_operational": 4, 00:23:04.311 "base_bdevs_list": [ 
00:23:04.311 { 00:23:04.311 "name": "BaseBdev1", 00:23:04.311 "uuid": "c86ff9de-d5b4-5609-b945-803a2df98020", 00:23:04.311 "is_configured": true, 00:23:04.311 "data_offset": 2048, 00:23:04.311 "data_size": 63488 00:23:04.311 }, 00:23:04.311 { 00:23:04.311 "name": "BaseBdev2", 00:23:04.311 "uuid": "bbd32680-e9be-5f8a-a27f-8008209154f9", 00:23:04.311 "is_configured": true, 00:23:04.311 "data_offset": 2048, 00:23:04.311 "data_size": 63488 00:23:04.311 }, 00:23:04.311 { 00:23:04.311 "name": "BaseBdev3", 00:23:04.311 "uuid": "62d799eb-c1a5-56e9-b2ab-29fcb9a3bcd7", 00:23:04.311 "is_configured": true, 00:23:04.311 "data_offset": 2048, 00:23:04.311 "data_size": 63488 00:23:04.311 }, 00:23:04.311 { 00:23:04.311 "name": "BaseBdev4", 00:23:04.311 "uuid": "9b45706e-27cd-55ab-8faf-80643f748893", 00:23:04.311 "is_configured": true, 00:23:04.311 "data_offset": 2048, 00:23:04.311 "data_size": 63488 00:23:04.311 } 00:23:04.311 ] 00:23:04.311 }' 00:23:04.311 07:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.311 07:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.601 07:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:04.601 07:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:04.601 [2024-10-07 07:44:04.148322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:05.535 07:44:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:05.535 07:44:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.535 "name": "raid_bdev1", 00:23:05.535 "uuid": "eb70a2c1-63ae-4d79-bad6-99654123aee5", 00:23:05.535 "strip_size_kb": 64, 00:23:05.535 "state": "online", 00:23:05.535 "raid_level": "concat", 00:23:05.535 "superblock": true, 00:23:05.535 "num_base_bdevs": 4, 00:23:05.535 "num_base_bdevs_discovered": 4, 00:23:05.535 "num_base_bdevs_operational": 4, 00:23:05.535 "base_bdevs_list": [ 00:23:05.535 { 00:23:05.535 "name": "BaseBdev1", 00:23:05.535 "uuid": "c86ff9de-d5b4-5609-b945-803a2df98020", 00:23:05.535 "is_configured": true, 00:23:05.535 "data_offset": 2048, 00:23:05.535 "data_size": 63488 00:23:05.535 }, 00:23:05.535 { 00:23:05.535 "name": "BaseBdev2", 00:23:05.535 "uuid": "bbd32680-e9be-5f8a-a27f-8008209154f9", 00:23:05.535 "is_configured": true, 00:23:05.535 "data_offset": 2048, 00:23:05.535 "data_size": 63488 00:23:05.535 }, 00:23:05.535 { 00:23:05.535 "name": "BaseBdev3", 00:23:05.535 "uuid": "62d799eb-c1a5-56e9-b2ab-29fcb9a3bcd7", 00:23:05.535 "is_configured": true, 00:23:05.535 "data_offset": 2048, 00:23:05.535 "data_size": 63488 00:23:05.535 }, 00:23:05.535 { 00:23:05.535 "name": "BaseBdev4", 00:23:05.535 "uuid": "9b45706e-27cd-55ab-8faf-80643f748893", 00:23:05.535 "is_configured": true, 00:23:05.535 "data_offset": 2048, 00:23:05.535 "data_size": 63488 00:23:05.535 } 00:23:05.535 ] 00:23:05.535 }' 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.535 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.102 [2024-10-07 07:44:05.495596] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:06.102 [2024-10-07 07:44:05.495658] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:06.102 [2024-10-07 07:44:05.498476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:06.102 [2024-10-07 07:44:05.498558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.102 [2024-10-07 07:44:05.498615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:06.102 [2024-10-07 07:44:05.498632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:06.102 { 00:23:06.102 "results": [ 00:23:06.102 { 00:23:06.102 "job": "raid_bdev1", 00:23:06.102 "core_mask": "0x1", 00:23:06.102 "workload": "randrw", 00:23:06.102 "percentage": 50, 00:23:06.102 "status": "finished", 00:23:06.102 "queue_depth": 1, 00:23:06.102 "io_size": 131072, 00:23:06.102 "runtime": 1.345261, 00:23:06.102 "iops": 14891.53405919, 00:23:06.102 "mibps": 1861.44175739875, 00:23:06.102 "io_failed": 1, 00:23:06.102 "io_timeout": 0, 00:23:06.102 "avg_latency_us": 93.32588998702207, 00:23:06.102 "min_latency_us": 27.916190476190476, 00:23:06.102 "max_latency_us": 1544.777142857143 00:23:06.102 } 00:23:06.102 ], 00:23:06.102 "core_count": 1 00:23:06.102 } 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73055 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 73055 ']' 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 73055 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # uname 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 73055 00:23:06.102 killing process with pid 73055 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 73055' 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 73055 00:23:06.102 [2024-10-07 07:44:05.542938] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:06.102 07:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 73055 00:23:06.668 [2024-10-07 07:44:05.937007] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g67orkv3vt 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:23:08.053 00:23:08.053 real 0m5.302s 00:23:08.053 user 0m6.241s 00:23:08.053 sys 0m0.633s 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # 
xtrace_disable 00:23:08.053 07:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.053 ************************************ 00:23:08.053 END TEST raid_read_error_test 00:23:08.053 ************************************ 00:23:08.053 07:44:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:23:08.053 07:44:07 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:23:08.053 07:44:07 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:08.053 07:44:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:08.312 ************************************ 00:23:08.312 START TEST raid_write_error_test 00:23:08.312 ************************************ 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test concat 4 write 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:08.312 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7eC4jmlDeC 00:23:08.312 07:44:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73206 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73206 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 73206 ']' 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:08.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:08.313 07:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.313 [2024-10-07 07:44:07.792614] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
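The write test's bdevperf log (/raidtest/tmp.7eC4jmlDeC above) is allocated at bdev_raid.sh@807 with `mktemp -p /raidtest`, relying on mktemp's default tmp.XXXXXXXXXX template. The same pattern, sketched against /tmp in place of the test's /raidtest mount:

```shell
# Sketch of the bdevperf_log allocation seen at bdev_raid.sh@807:
# mktemp with a target directory (-p) and its default tmp.XXXXXXXXXX
# name template. /tmp stands in for the test's /raidtest directory.
bdevperf_log=$(mktemp -p /tmp)
test -f "$bdevperf_log" && echo "created $bdevperf_log"
rm -f "$bdevperf_log"
```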
00:23:08.313 [2024-10-07 07:44:07.792814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73206 ] 00:23:08.571 [2024-10-07 07:44:07.975918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.829 [2024-10-07 07:44:08.268747] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.089 [2024-10-07 07:44:08.537665] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:09.089 [2024-10-07 07:44:08.537735] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.349 BaseBdev1_malloc 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.349 true 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.349 [2024-10-07 07:44:08.779841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:09.349 [2024-10-07 07:44:08.779926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.349 [2024-10-07 07:44:08.779951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:09.349 [2024-10-07 07:44:08.779968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.349 [2024-10-07 07:44:08.783029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.349 [2024-10-07 07:44:08.783079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:09.349 BaseBdev1 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.349 BaseBdev2_malloc 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:09.349 07:44:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.349 true 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.349 [2024-10-07 07:44:08.857528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:09.349 [2024-10-07 07:44:08.857612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.349 [2024-10-07 07:44:08.857636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:09.349 [2024-10-07 07:44:08.857653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.349 [2024-10-07 07:44:08.860683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.349 [2024-10-07 07:44:08.860742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:09.349 BaseBdev2 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.349 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:23:09.607 BaseBdev3_malloc 00:23:09.607 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.607 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.608 true 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.608 [2024-10-07 07:44:08.926770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:09.608 [2024-10-07 07:44:08.926847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.608 [2024-10-07 07:44:08.926869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:09.608 [2024-10-07 07:44:08.926886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.608 [2024-10-07 07:44:08.929738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.608 [2024-10-07 07:44:08.929781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:09.608 BaseBdev3 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.608 BaseBdev4_malloc 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.608 true 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.608 [2024-10-07 07:44:08.995966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:09.608 [2024-10-07 07:44:08.996035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.608 [2024-10-07 07:44:08.996057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:09.608 [2024-10-07 07:44:08.996075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.608 [2024-10-07 07:44:08.998889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.608 [2024-10-07 07:44:08.998932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:09.608 BaseBdev4 
00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.608 07:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.608 [2024-10-07 07:44:09.004075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:09.608 [2024-10-07 07:44:09.006631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:09.608 [2024-10-07 07:44:09.006734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.608 [2024-10-07 07:44:09.006800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:09.608 [2024-10-07 07:44:09.007032] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:09.608 [2024-10-07 07:44:09.007048] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:09.608 [2024-10-07 07:44:09.007321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:09.608 [2024-10-07 07:44:09.007492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:09.608 [2024-10-07 07:44:09.007503] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:09.608 [2024-10-07 07:44:09.007672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.608 "name": "raid_bdev1", 00:23:09.608 "uuid": "42341843-9a61-4d53-9c1d-29d0fecc0edc", 00:23:09.608 "strip_size_kb": 64, 00:23:09.608 "state": "online", 00:23:09.608 "raid_level": "concat", 00:23:09.608 "superblock": true, 00:23:09.608 "num_base_bdevs": 4, 00:23:09.608 "num_base_bdevs_discovered": 4, 00:23:09.608 
"num_base_bdevs_operational": 4, 00:23:09.608 "base_bdevs_list": [ 00:23:09.608 { 00:23:09.608 "name": "BaseBdev1", 00:23:09.608 "uuid": "7fe72be1-31bd-5f2b-a36c-2a157ec76d22", 00:23:09.608 "is_configured": true, 00:23:09.608 "data_offset": 2048, 00:23:09.608 "data_size": 63488 00:23:09.608 }, 00:23:09.608 { 00:23:09.608 "name": "BaseBdev2", 00:23:09.608 "uuid": "aa0585fa-1e6c-5474-a6c5-589261d36715", 00:23:09.608 "is_configured": true, 00:23:09.608 "data_offset": 2048, 00:23:09.608 "data_size": 63488 00:23:09.608 }, 00:23:09.608 { 00:23:09.608 "name": "BaseBdev3", 00:23:09.608 "uuid": "2477e064-9277-5088-a9f8-430037cf0956", 00:23:09.608 "is_configured": true, 00:23:09.608 "data_offset": 2048, 00:23:09.608 "data_size": 63488 00:23:09.608 }, 00:23:09.608 { 00:23:09.608 "name": "BaseBdev4", 00:23:09.608 "uuid": "845836f4-05ad-5dc2-8251-a367708b8144", 00:23:09.608 "is_configured": true, 00:23:09.608 "data_offset": 2048, 00:23:09.608 "data_size": 63488 00:23:09.608 } 00:23:09.608 ] 00:23:09.608 }' 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.608 07:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.173 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:10.173 07:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:10.173 [2024-10-07 07:44:09.630043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:11.108 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.109 07:44:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.109 "name": "raid_bdev1", 00:23:11.109 "uuid": "42341843-9a61-4d53-9c1d-29d0fecc0edc", 00:23:11.109 "strip_size_kb": 64, 00:23:11.109 "state": "online", 00:23:11.109 "raid_level": "concat", 00:23:11.109 "superblock": true, 00:23:11.109 "num_base_bdevs": 4, 00:23:11.109 "num_base_bdevs_discovered": 4, 00:23:11.109 "num_base_bdevs_operational": 4, 00:23:11.109 "base_bdevs_list": [ 00:23:11.109 { 00:23:11.109 "name": "BaseBdev1", 00:23:11.109 "uuid": "7fe72be1-31bd-5f2b-a36c-2a157ec76d22", 00:23:11.109 "is_configured": true, 00:23:11.109 "data_offset": 2048, 00:23:11.109 "data_size": 63488 00:23:11.109 }, 00:23:11.109 { 00:23:11.109 "name": "BaseBdev2", 00:23:11.109 "uuid": "aa0585fa-1e6c-5474-a6c5-589261d36715", 00:23:11.109 "is_configured": true, 00:23:11.109 "data_offset": 2048, 00:23:11.109 "data_size": 63488 00:23:11.109 }, 00:23:11.109 { 00:23:11.109 "name": "BaseBdev3", 00:23:11.109 "uuid": "2477e064-9277-5088-a9f8-430037cf0956", 00:23:11.109 "is_configured": true, 00:23:11.109 "data_offset": 2048, 00:23:11.109 "data_size": 63488 00:23:11.109 }, 00:23:11.109 { 00:23:11.109 "name": "BaseBdev4", 00:23:11.109 "uuid": "845836f4-05ad-5dc2-8251-a367708b8144", 00:23:11.109 "is_configured": true, 00:23:11.109 "data_offset": 2048, 00:23:11.109 "data_size": 63488 00:23:11.109 } 00:23:11.109 ] 00:23:11.109 }' 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.109 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.675 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:11.675 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:11.675 07:44:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.675 [2024-10-07 07:44:10.981762] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.675 [2024-10-07 07:44:10.981807] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.675 [2024-10-07 07:44:10.984885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.675 [2024-10-07 07:44:10.984959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.676 [2024-10-07 07:44:10.985013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.676 [2024-10-07 07:44:10.985030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:11.676 { 00:23:11.676 "results": [ 00:23:11.676 { 00:23:11.676 "job": "raid_bdev1", 00:23:11.676 "core_mask": "0x1", 00:23:11.676 "workload": "randrw", 00:23:11.676 "percentage": 50, 00:23:11.676 "status": "finished", 00:23:11.676 "queue_depth": 1, 00:23:11.676 "io_size": 131072, 00:23:11.676 "runtime": 1.348827, 00:23:11.676 "iops": 14113.74475748187, 00:23:11.676 "mibps": 1764.2180946852338, 00:23:11.676 "io_failed": 1, 00:23:11.676 "io_timeout": 0, 00:23:11.676 "avg_latency_us": 98.34115348250867, 00:23:11.676 "min_latency_us": 27.55047619047619, 00:23:11.676 "max_latency_us": 1435.5504761904763 00:23:11.676 } 00:23:11.676 ], 00:23:11.676 "core_count": 1 00:23:11.676 } 00:23:11.676 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:11.676 07:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73206 00:23:11.676 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 73206 ']' 00:23:11.676 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 73206 00:23:11.676 07:44:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # uname 00:23:11.676 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:11.676 07:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 73206 00:23:11.676 07:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:11.676 killing process with pid 73206 00:23:11.676 07:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:11.676 07:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 73206' 00:23:11.676 07:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 73206 00:23:11.676 [2024-10-07 07:44:11.027351] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:11.676 07:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 73206 00:23:11.932 [2024-10-07 07:44:11.382041] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7eC4jmlDeC 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:23:13.334 00:23:13.334 real 0m5.224s 00:23:13.334 user 0m6.057s 
00:23:13.334 sys 0m0.842s 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:13.334 ************************************ 00:23:13.334 END TEST raid_write_error_test 00:23:13.334 ************************************ 00:23:13.334 07:44:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.592 07:44:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:23:13.592 07:44:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:23:13.592 07:44:12 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:23:13.592 07:44:12 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:13.592 07:44:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.592 ************************************ 00:23:13.592 START TEST raid_state_function_test 00:23:13.592 ************************************ 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 4 false 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:13.592 
07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:13.592 07:44:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73350 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:13.592 Process raid pid: 73350 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73350' 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73350 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 73350 ']' 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:13.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:13.592 07:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.592 [2024-10-07 07:44:13.038137] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:23:13.592 [2024-10-07 07:44:13.038325] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.850 [2024-10-07 07:44:13.226789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.108 [2024-10-07 07:44:13.454642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.365 [2024-10-07 07:44:13.689477] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.365 [2024-10-07 07:44:13.689728] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.623 [2024-10-07 07:44:14.021174] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:14.623 [2024-10-07 07:44:14.021236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:14.623 [2024-10-07 07:44:14.021249] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:14.623 [2024-10-07 07:44:14.021266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:14.623 [2024-10-07 07:44:14.021274] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:23:14.623 [2024-10-07 07:44:14.021287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:14.623 [2024-10-07 07:44:14.021296] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:14.623 [2024-10-07 07:44:14.021309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:14.623 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.624 "name": "Existed_Raid", 00:23:14.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.624 "strip_size_kb": 0, 00:23:14.624 "state": "configuring", 00:23:14.624 "raid_level": "raid1", 00:23:14.624 "superblock": false, 00:23:14.624 "num_base_bdevs": 4, 00:23:14.624 "num_base_bdevs_discovered": 0, 00:23:14.624 "num_base_bdevs_operational": 4, 00:23:14.624 "base_bdevs_list": [ 00:23:14.624 { 00:23:14.624 "name": "BaseBdev1", 00:23:14.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.624 "is_configured": false, 00:23:14.624 "data_offset": 0, 00:23:14.624 "data_size": 0 00:23:14.624 }, 00:23:14.624 { 00:23:14.624 "name": "BaseBdev2", 00:23:14.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.624 "is_configured": false, 00:23:14.624 "data_offset": 0, 00:23:14.624 "data_size": 0 00:23:14.624 }, 00:23:14.624 { 00:23:14.624 "name": "BaseBdev3", 00:23:14.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.624 "is_configured": false, 00:23:14.624 "data_offset": 0, 00:23:14.624 "data_size": 0 00:23:14.624 }, 00:23:14.624 { 00:23:14.624 "name": "BaseBdev4", 00:23:14.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.624 "is_configured": false, 00:23:14.624 "data_offset": 0, 00:23:14.624 "data_size": 0 00:23:14.624 } 00:23:14.624 ] 00:23:14.624 }' 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.624 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.882 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:23:14.882 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:14.882 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.142 [2024-10-07 07:44:14.441227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:15.142 [2024-10-07 07:44:14.441274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.142 [2024-10-07 07:44:14.449231] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:15.142 [2024-10-07 07:44:14.449409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:15.142 [2024-10-07 07:44:14.449433] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.142 [2024-10-07 07:44:14.449449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.142 [2024-10-07 07:44:14.449458] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.142 [2024-10-07 07:44:14.449472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.142 [2024-10-07 07:44:14.449481] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:15.142 [2024-10-07 07:44:14.449495] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.142 [2024-10-07 07:44:14.509180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:15.142 BaseBdev1 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.142 [ 00:23:15.142 { 00:23:15.142 "name": "BaseBdev1", 00:23:15.142 "aliases": [ 00:23:15.142 "c6eb11b5-89ee-4aea-a63b-cad4fd453030" 00:23:15.142 ], 00:23:15.142 "product_name": "Malloc disk", 00:23:15.142 "block_size": 512, 00:23:15.142 "num_blocks": 65536, 00:23:15.142 "uuid": "c6eb11b5-89ee-4aea-a63b-cad4fd453030", 00:23:15.142 "assigned_rate_limits": { 00:23:15.142 "rw_ios_per_sec": 0, 00:23:15.142 "rw_mbytes_per_sec": 0, 00:23:15.142 "r_mbytes_per_sec": 0, 00:23:15.142 "w_mbytes_per_sec": 0 00:23:15.142 }, 00:23:15.142 "claimed": true, 00:23:15.142 "claim_type": "exclusive_write", 00:23:15.142 "zoned": false, 00:23:15.142 "supported_io_types": { 00:23:15.142 "read": true, 00:23:15.142 "write": true, 00:23:15.142 "unmap": true, 00:23:15.142 "flush": true, 00:23:15.142 "reset": true, 00:23:15.142 "nvme_admin": false, 00:23:15.142 "nvme_io": false, 00:23:15.142 "nvme_io_md": false, 00:23:15.142 "write_zeroes": true, 00:23:15.142 "zcopy": true, 00:23:15.142 "get_zone_info": false, 00:23:15.142 "zone_management": false, 00:23:15.142 "zone_append": false, 00:23:15.142 "compare": false, 00:23:15.142 "compare_and_write": false, 00:23:15.142 "abort": true, 00:23:15.142 "seek_hole": false, 00:23:15.142 "seek_data": false, 00:23:15.142 "copy": true, 00:23:15.142 "nvme_iov_md": false 00:23:15.142 }, 00:23:15.142 "memory_domains": [ 00:23:15.142 { 00:23:15.142 "dma_device_id": "system", 00:23:15.142 "dma_device_type": 1 00:23:15.142 }, 00:23:15.142 { 00:23:15.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.142 "dma_device_type": 2 00:23:15.142 } 00:23:15.142 ], 00:23:15.142 "driver_specific": {} 00:23:15.142 } 00:23:15.142 ] 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 
-- # [[ 0 == 0 ]] 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:15.142 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.143 "name": "Existed_Raid", 
00:23:15.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.143 "strip_size_kb": 0, 00:23:15.143 "state": "configuring", 00:23:15.143 "raid_level": "raid1", 00:23:15.143 "superblock": false, 00:23:15.143 "num_base_bdevs": 4, 00:23:15.143 "num_base_bdevs_discovered": 1, 00:23:15.143 "num_base_bdevs_operational": 4, 00:23:15.143 "base_bdevs_list": [ 00:23:15.143 { 00:23:15.143 "name": "BaseBdev1", 00:23:15.143 "uuid": "c6eb11b5-89ee-4aea-a63b-cad4fd453030", 00:23:15.143 "is_configured": true, 00:23:15.143 "data_offset": 0, 00:23:15.143 "data_size": 65536 00:23:15.143 }, 00:23:15.143 { 00:23:15.143 "name": "BaseBdev2", 00:23:15.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.143 "is_configured": false, 00:23:15.143 "data_offset": 0, 00:23:15.143 "data_size": 0 00:23:15.143 }, 00:23:15.143 { 00:23:15.143 "name": "BaseBdev3", 00:23:15.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.143 "is_configured": false, 00:23:15.143 "data_offset": 0, 00:23:15.143 "data_size": 0 00:23:15.143 }, 00:23:15.143 { 00:23:15.143 "name": "BaseBdev4", 00:23:15.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.143 "is_configured": false, 00:23:15.143 "data_offset": 0, 00:23:15.143 "data_size": 0 00:23:15.143 } 00:23:15.143 ] 00:23:15.143 }' 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.143 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.712 07:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:15.712 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.712 07:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.712 [2024-10-07 07:44:15.001360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:15.712 [2024-10-07 07:44:15.001419] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.712 [2024-10-07 07:44:15.009381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:15.712 [2024-10-07 07:44:15.011698] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.712 [2024-10-07 07:44:15.011873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.712 [2024-10-07 07:44:15.011966] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.712 [2024-10-07 07:44:15.012018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.712 [2024-10-07 07:44:15.012205] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:15.712 [2024-10-07 07:44:15.012251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:15.712 
07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.712 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.712 "name": "Existed_Raid", 00:23:15.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.712 "strip_size_kb": 0, 00:23:15.712 "state": "configuring", 00:23:15.712 "raid_level": "raid1", 00:23:15.712 "superblock": false, 00:23:15.712 "num_base_bdevs": 4, 00:23:15.712 "num_base_bdevs_discovered": 1, 
00:23:15.712 "num_base_bdevs_operational": 4, 00:23:15.712 "base_bdevs_list": [ 00:23:15.712 { 00:23:15.712 "name": "BaseBdev1", 00:23:15.712 "uuid": "c6eb11b5-89ee-4aea-a63b-cad4fd453030", 00:23:15.712 "is_configured": true, 00:23:15.713 "data_offset": 0, 00:23:15.713 "data_size": 65536 00:23:15.713 }, 00:23:15.713 { 00:23:15.713 "name": "BaseBdev2", 00:23:15.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.713 "is_configured": false, 00:23:15.713 "data_offset": 0, 00:23:15.713 "data_size": 0 00:23:15.713 }, 00:23:15.713 { 00:23:15.713 "name": "BaseBdev3", 00:23:15.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.713 "is_configured": false, 00:23:15.713 "data_offset": 0, 00:23:15.713 "data_size": 0 00:23:15.713 }, 00:23:15.713 { 00:23:15.713 "name": "BaseBdev4", 00:23:15.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.713 "is_configured": false, 00:23:15.713 "data_offset": 0, 00:23:15.713 "data_size": 0 00:23:15.713 } 00:23:15.713 ] 00:23:15.713 }' 00:23:15.713 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.713 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.971 [2024-10-07 07:44:15.519952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:15.971 BaseBdev2 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:15.971 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 [ 00:23:16.230 { 00:23:16.230 "name": "BaseBdev2", 00:23:16.230 "aliases": [ 00:23:16.230 "a601043a-9288-4c98-a871-8136e8a3b98e" 00:23:16.230 ], 00:23:16.230 "product_name": "Malloc disk", 00:23:16.230 "block_size": 512, 00:23:16.230 "num_blocks": 65536, 00:23:16.230 "uuid": "a601043a-9288-4c98-a871-8136e8a3b98e", 00:23:16.230 "assigned_rate_limits": { 00:23:16.230 "rw_ios_per_sec": 0, 00:23:16.230 "rw_mbytes_per_sec": 0, 00:23:16.230 "r_mbytes_per_sec": 0, 00:23:16.230 "w_mbytes_per_sec": 0 00:23:16.230 }, 00:23:16.230 "claimed": true, 00:23:16.230 "claim_type": "exclusive_write", 00:23:16.230 "zoned": false, 00:23:16.230 "supported_io_types": { 00:23:16.230 "read": true, 
00:23:16.230 "write": true, 00:23:16.230 "unmap": true, 00:23:16.230 "flush": true, 00:23:16.230 "reset": true, 00:23:16.230 "nvme_admin": false, 00:23:16.230 "nvme_io": false, 00:23:16.230 "nvme_io_md": false, 00:23:16.230 "write_zeroes": true, 00:23:16.230 "zcopy": true, 00:23:16.230 "get_zone_info": false, 00:23:16.230 "zone_management": false, 00:23:16.230 "zone_append": false, 00:23:16.230 "compare": false, 00:23:16.230 "compare_and_write": false, 00:23:16.230 "abort": true, 00:23:16.230 "seek_hole": false, 00:23:16.230 "seek_data": false, 00:23:16.230 "copy": true, 00:23:16.230 "nvme_iov_md": false 00:23:16.230 }, 00:23:16.230 "memory_domains": [ 00:23:16.230 { 00:23:16.230 "dma_device_id": "system", 00:23:16.230 "dma_device_type": 1 00:23:16.230 }, 00:23:16.230 { 00:23:16.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.230 "dma_device_type": 2 00:23:16.230 } 00:23:16.230 ], 00:23:16.230 "driver_specific": {} 00:23:16.230 } 00:23:16.230 ] 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:16.230 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.230 "name": "Existed_Raid", 00:23:16.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.230 "strip_size_kb": 0, 00:23:16.230 "state": "configuring", 00:23:16.230 "raid_level": "raid1", 00:23:16.230 "superblock": false, 00:23:16.230 "num_base_bdevs": 4, 00:23:16.230 "num_base_bdevs_discovered": 2, 00:23:16.230 "num_base_bdevs_operational": 4, 00:23:16.230 "base_bdevs_list": [ 00:23:16.230 { 00:23:16.230 "name": "BaseBdev1", 00:23:16.231 "uuid": "c6eb11b5-89ee-4aea-a63b-cad4fd453030", 00:23:16.231 "is_configured": true, 00:23:16.231 "data_offset": 0, 00:23:16.231 "data_size": 65536 00:23:16.231 }, 00:23:16.231 { 00:23:16.231 "name": "BaseBdev2", 00:23:16.231 "uuid": "a601043a-9288-4c98-a871-8136e8a3b98e", 00:23:16.231 "is_configured": true, 
00:23:16.231 "data_offset": 0, 00:23:16.231 "data_size": 65536 00:23:16.231 }, 00:23:16.231 { 00:23:16.231 "name": "BaseBdev3", 00:23:16.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.231 "is_configured": false, 00:23:16.231 "data_offset": 0, 00:23:16.231 "data_size": 0 00:23:16.231 }, 00:23:16.231 { 00:23:16.231 "name": "BaseBdev4", 00:23:16.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.231 "is_configured": false, 00:23:16.231 "data_offset": 0, 00:23:16.231 "data_size": 0 00:23:16.231 } 00:23:16.231 ] 00:23:16.231 }' 00:23:16.231 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.231 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.490 07:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:16.490 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:16.490 07:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.490 [2024-10-07 07:44:16.037517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:16.490 BaseBdev3 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:16.490 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.749 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:16.749 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:16.749 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:16.749 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.749 [ 00:23:16.749 { 00:23:16.749 "name": "BaseBdev3", 00:23:16.749 "aliases": [ 00:23:16.749 "d5dd81e5-6c3d-4f1f-9782-af2de991038c" 00:23:16.749 ], 00:23:16.749 "product_name": "Malloc disk", 00:23:16.749 "block_size": 512, 00:23:16.749 "num_blocks": 65536, 00:23:16.749 "uuid": "d5dd81e5-6c3d-4f1f-9782-af2de991038c", 00:23:16.749 "assigned_rate_limits": { 00:23:16.749 "rw_ios_per_sec": 0, 00:23:16.749 "rw_mbytes_per_sec": 0, 00:23:16.749 "r_mbytes_per_sec": 0, 00:23:16.749 "w_mbytes_per_sec": 0 00:23:16.749 }, 00:23:16.749 "claimed": true, 00:23:16.749 "claim_type": "exclusive_write", 00:23:16.749 "zoned": false, 00:23:16.749 "supported_io_types": { 00:23:16.749 "read": true, 00:23:16.750 "write": true, 00:23:16.750 "unmap": true, 00:23:16.750 "flush": true, 00:23:16.750 "reset": true, 00:23:16.750 "nvme_admin": false, 00:23:16.750 "nvme_io": false, 00:23:16.750 "nvme_io_md": false, 00:23:16.750 "write_zeroes": true, 00:23:16.750 "zcopy": true, 00:23:16.750 "get_zone_info": false, 00:23:16.750 "zone_management": false, 00:23:16.750 "zone_append": false, 00:23:16.750 "compare": false, 00:23:16.750 "compare_and_write": false, 
00:23:16.750 "abort": true, 00:23:16.750 "seek_hole": false, 00:23:16.750 "seek_data": false, 00:23:16.750 "copy": true, 00:23:16.750 "nvme_iov_md": false 00:23:16.750 }, 00:23:16.750 "memory_domains": [ 00:23:16.750 { 00:23:16.750 "dma_device_id": "system", 00:23:16.750 "dma_device_type": 1 00:23:16.750 }, 00:23:16.750 { 00:23:16.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.750 "dma_device_type": 2 00:23:16.750 } 00:23:16.750 ], 00:23:16.750 "driver_specific": {} 00:23:16.750 } 00:23:16.750 ] 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.750 "name": "Existed_Raid", 00:23:16.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.750 "strip_size_kb": 0, 00:23:16.750 "state": "configuring", 00:23:16.750 "raid_level": "raid1", 00:23:16.750 "superblock": false, 00:23:16.750 "num_base_bdevs": 4, 00:23:16.750 "num_base_bdevs_discovered": 3, 00:23:16.750 "num_base_bdevs_operational": 4, 00:23:16.750 "base_bdevs_list": [ 00:23:16.750 { 00:23:16.750 "name": "BaseBdev1", 00:23:16.750 "uuid": "c6eb11b5-89ee-4aea-a63b-cad4fd453030", 00:23:16.750 "is_configured": true, 00:23:16.750 "data_offset": 0, 00:23:16.750 "data_size": 65536 00:23:16.750 }, 00:23:16.750 { 00:23:16.750 "name": "BaseBdev2", 00:23:16.750 "uuid": "a601043a-9288-4c98-a871-8136e8a3b98e", 00:23:16.750 "is_configured": true, 00:23:16.750 "data_offset": 0, 00:23:16.750 "data_size": 65536 00:23:16.750 }, 00:23:16.750 { 00:23:16.750 "name": "BaseBdev3", 00:23:16.750 "uuid": "d5dd81e5-6c3d-4f1f-9782-af2de991038c", 00:23:16.750 "is_configured": true, 00:23:16.750 "data_offset": 0, 00:23:16.750 "data_size": 65536 00:23:16.750 }, 00:23:16.750 { 00:23:16.750 "name": "BaseBdev4", 00:23:16.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.750 "is_configured": false, 
00:23:16.750 "data_offset": 0, 00:23:16.750 "data_size": 0 00:23:16.750 } 00:23:16.750 ] 00:23:16.750 }' 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.750 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.009 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:17.009 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.009 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.009 [2024-10-07 07:44:16.555771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:17.009 [2024-10-07 07:44:16.556031] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:17.009 [2024-10-07 07:44:16.556056] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:17.009 [2024-10-07 07:44:16.556393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:17.009 [2024-10-07 07:44:16.556603] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:17.009 [2024-10-07 07:44:16.556621] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:17.009 BaseBdev4 00:23:17.009 [2024-10-07 07:44:16.556933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.009 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.009 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:17.009 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:23:17.009 07:44:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:17.010 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:17.010 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:17.010 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:17.010 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:17.010 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.010 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.269 [ 00:23:17.269 { 00:23:17.269 "name": "BaseBdev4", 00:23:17.269 "aliases": [ 00:23:17.269 "9cc8acc4-19a6-4bf9-9043-3393cb7cad7c" 00:23:17.269 ], 00:23:17.269 "product_name": "Malloc disk", 00:23:17.269 "block_size": 512, 00:23:17.269 "num_blocks": 65536, 00:23:17.269 "uuid": "9cc8acc4-19a6-4bf9-9043-3393cb7cad7c", 00:23:17.269 "assigned_rate_limits": { 00:23:17.269 "rw_ios_per_sec": 0, 00:23:17.269 "rw_mbytes_per_sec": 0, 00:23:17.269 "r_mbytes_per_sec": 0, 00:23:17.269 "w_mbytes_per_sec": 0 00:23:17.269 }, 00:23:17.269 "claimed": true, 00:23:17.269 "claim_type": "exclusive_write", 00:23:17.269 "zoned": false, 00:23:17.269 "supported_io_types": { 00:23:17.269 "read": true, 00:23:17.269 "write": true, 00:23:17.269 "unmap": true, 00:23:17.269 "flush": true, 00:23:17.269 "reset": true, 00:23:17.269 
"nvme_admin": false, 00:23:17.269 "nvme_io": false, 00:23:17.269 "nvme_io_md": false, 00:23:17.269 "write_zeroes": true, 00:23:17.269 "zcopy": true, 00:23:17.269 "get_zone_info": false, 00:23:17.269 "zone_management": false, 00:23:17.269 "zone_append": false, 00:23:17.269 "compare": false, 00:23:17.269 "compare_and_write": false, 00:23:17.269 "abort": true, 00:23:17.269 "seek_hole": false, 00:23:17.269 "seek_data": false, 00:23:17.269 "copy": true, 00:23:17.269 "nvme_iov_md": false 00:23:17.269 }, 00:23:17.269 "memory_domains": [ 00:23:17.269 { 00:23:17.269 "dma_device_id": "system", 00:23:17.269 "dma_device_type": 1 00:23:17.269 }, 00:23:17.269 { 00:23:17.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.269 "dma_device_type": 2 00:23:17.269 } 00:23:17.269 ], 00:23:17.269 "driver_specific": {} 00:23:17.269 } 00:23:17.269 ] 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.269 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:17.270 07:44:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.270 "name": "Existed_Raid", 00:23:17.270 "uuid": "80bbb308-88e2-4645-9dfe-56ef174f024c", 00:23:17.270 "strip_size_kb": 0, 00:23:17.270 "state": "online", 00:23:17.270 "raid_level": "raid1", 00:23:17.270 "superblock": false, 00:23:17.270 "num_base_bdevs": 4, 00:23:17.270 "num_base_bdevs_discovered": 4, 00:23:17.270 "num_base_bdevs_operational": 4, 00:23:17.270 "base_bdevs_list": [ 00:23:17.270 { 00:23:17.270 "name": "BaseBdev1", 00:23:17.270 "uuid": "c6eb11b5-89ee-4aea-a63b-cad4fd453030", 00:23:17.270 "is_configured": true, 00:23:17.270 "data_offset": 0, 00:23:17.270 "data_size": 65536 00:23:17.270 }, 00:23:17.270 { 00:23:17.270 "name": "BaseBdev2", 00:23:17.270 "uuid": "a601043a-9288-4c98-a871-8136e8a3b98e", 00:23:17.270 "is_configured": true, 00:23:17.270 "data_offset": 0, 00:23:17.270 "data_size": 65536 00:23:17.270 }, 00:23:17.270 { 00:23:17.270 "name": "BaseBdev3", 00:23:17.270 "uuid": 
"d5dd81e5-6c3d-4f1f-9782-af2de991038c", 00:23:17.270 "is_configured": true, 00:23:17.270 "data_offset": 0, 00:23:17.270 "data_size": 65536 00:23:17.270 }, 00:23:17.270 { 00:23:17.270 "name": "BaseBdev4", 00:23:17.270 "uuid": "9cc8acc4-19a6-4bf9-9043-3393cb7cad7c", 00:23:17.270 "is_configured": true, 00:23:17.270 "data_offset": 0, 00:23:17.270 "data_size": 65536 00:23:17.270 } 00:23:17.270 ] 00:23:17.270 }' 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.270 07:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.529 [2024-10-07 07:44:17.028332] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.529 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.529 07:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:17.529 "name": "Existed_Raid", 00:23:17.529 "aliases": [ 00:23:17.529 "80bbb308-88e2-4645-9dfe-56ef174f024c" 00:23:17.529 ], 00:23:17.529 "product_name": "Raid Volume", 00:23:17.529 "block_size": 512, 00:23:17.529 "num_blocks": 65536, 00:23:17.529 "uuid": "80bbb308-88e2-4645-9dfe-56ef174f024c", 00:23:17.529 "assigned_rate_limits": { 00:23:17.529 "rw_ios_per_sec": 0, 00:23:17.529 "rw_mbytes_per_sec": 0, 00:23:17.529 "r_mbytes_per_sec": 0, 00:23:17.529 "w_mbytes_per_sec": 0 00:23:17.529 }, 00:23:17.529 "claimed": false, 00:23:17.529 "zoned": false, 00:23:17.529 "supported_io_types": { 00:23:17.529 "read": true, 00:23:17.529 "write": true, 00:23:17.529 "unmap": false, 00:23:17.529 "flush": false, 00:23:17.529 "reset": true, 00:23:17.529 "nvme_admin": false, 00:23:17.529 "nvme_io": false, 00:23:17.529 "nvme_io_md": false, 00:23:17.529 "write_zeroes": true, 00:23:17.529 "zcopy": false, 00:23:17.529 "get_zone_info": false, 00:23:17.529 "zone_management": false, 00:23:17.529 "zone_append": false, 00:23:17.529 "compare": false, 00:23:17.529 "compare_and_write": false, 00:23:17.529 "abort": false, 00:23:17.529 "seek_hole": false, 00:23:17.529 "seek_data": false, 00:23:17.529 "copy": false, 00:23:17.529 "nvme_iov_md": false 00:23:17.529 }, 00:23:17.529 "memory_domains": [ 00:23:17.529 { 00:23:17.529 "dma_device_id": "system", 00:23:17.529 "dma_device_type": 1 00:23:17.529 }, 00:23:17.529 { 00:23:17.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.529 "dma_device_type": 2 00:23:17.529 }, 00:23:17.529 { 00:23:17.529 "dma_device_id": "system", 00:23:17.529 "dma_device_type": 1 00:23:17.529 }, 00:23:17.529 { 00:23:17.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.529 "dma_device_type": 2 00:23:17.529 }, 00:23:17.529 { 00:23:17.529 "dma_device_id": "system", 00:23:17.529 "dma_device_type": 1 00:23:17.529 }, 00:23:17.529 { 00:23:17.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:23:17.529 "dma_device_type": 2 00:23:17.529 }, 00:23:17.529 { 00:23:17.529 "dma_device_id": "system", 00:23:17.529 "dma_device_type": 1 00:23:17.529 }, 00:23:17.530 { 00:23:17.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.530 "dma_device_type": 2 00:23:17.530 } 00:23:17.530 ], 00:23:17.530 "driver_specific": { 00:23:17.530 "raid": { 00:23:17.530 "uuid": "80bbb308-88e2-4645-9dfe-56ef174f024c", 00:23:17.530 "strip_size_kb": 0, 00:23:17.530 "state": "online", 00:23:17.530 "raid_level": "raid1", 00:23:17.530 "superblock": false, 00:23:17.530 "num_base_bdevs": 4, 00:23:17.530 "num_base_bdevs_discovered": 4, 00:23:17.530 "num_base_bdevs_operational": 4, 00:23:17.530 "base_bdevs_list": [ 00:23:17.530 { 00:23:17.530 "name": "BaseBdev1", 00:23:17.530 "uuid": "c6eb11b5-89ee-4aea-a63b-cad4fd453030", 00:23:17.530 "is_configured": true, 00:23:17.530 "data_offset": 0, 00:23:17.530 "data_size": 65536 00:23:17.530 }, 00:23:17.530 { 00:23:17.530 "name": "BaseBdev2", 00:23:17.530 "uuid": "a601043a-9288-4c98-a871-8136e8a3b98e", 00:23:17.530 "is_configured": true, 00:23:17.530 "data_offset": 0, 00:23:17.530 "data_size": 65536 00:23:17.530 }, 00:23:17.530 { 00:23:17.530 "name": "BaseBdev3", 00:23:17.530 "uuid": "d5dd81e5-6c3d-4f1f-9782-af2de991038c", 00:23:17.530 "is_configured": true, 00:23:17.530 "data_offset": 0, 00:23:17.530 "data_size": 65536 00:23:17.530 }, 00:23:17.530 { 00:23:17.530 "name": "BaseBdev4", 00:23:17.530 "uuid": "9cc8acc4-19a6-4bf9-9043-3393cb7cad7c", 00:23:17.530 "is_configured": true, 00:23:17.530 "data_offset": 0, 00:23:17.530 "data_size": 65536 00:23:17.530 } 00:23:17.530 ] 00:23:17.530 } 00:23:17.530 } 00:23:17.530 }' 00:23:17.530 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:17.788 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:17.788 BaseBdev2 00:23:17.788 BaseBdev3 
00:23:17.788 BaseBdev4' 00:23:17.788 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.788 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:17.788 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.788 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:17.788 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.789 07:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.789 07:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:17.789 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.789 [2024-10-07 07:44:17.348115] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.048 
07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.048 "name": "Existed_Raid", 00:23:18.048 "uuid": "80bbb308-88e2-4645-9dfe-56ef174f024c", 00:23:18.048 "strip_size_kb": 0, 00:23:18.048 "state": "online", 00:23:18.048 "raid_level": "raid1", 00:23:18.048 "superblock": false, 00:23:18.048 "num_base_bdevs": 4, 00:23:18.048 "num_base_bdevs_discovered": 3, 00:23:18.048 "num_base_bdevs_operational": 3, 00:23:18.048 "base_bdevs_list": [ 00:23:18.048 { 00:23:18.048 "name": null, 00:23:18.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.048 "is_configured": false, 00:23:18.048 "data_offset": 0, 00:23:18.048 "data_size": 65536 00:23:18.048 }, 00:23:18.048 { 00:23:18.048 "name": "BaseBdev2", 00:23:18.048 "uuid": "a601043a-9288-4c98-a871-8136e8a3b98e", 00:23:18.048 "is_configured": true, 00:23:18.048 "data_offset": 0, 00:23:18.048 "data_size": 65536 00:23:18.048 }, 00:23:18.048 { 00:23:18.048 "name": "BaseBdev3", 00:23:18.048 "uuid": "d5dd81e5-6c3d-4f1f-9782-af2de991038c", 00:23:18.048 "is_configured": true, 00:23:18.048 "data_offset": 0, 
00:23:18.048 "data_size": 65536 00:23:18.048 }, 00:23:18.048 { 00:23:18.048 "name": "BaseBdev4", 00:23:18.048 "uuid": "9cc8acc4-19a6-4bf9-9043-3393cb7cad7c", 00:23:18.048 "is_configured": true, 00:23:18.048 "data_offset": 0, 00:23:18.048 "data_size": 65536 00:23:18.048 } 00:23:18.048 ] 00:23:18.048 }' 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.048 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.614 07:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.614 [2024-10-07 07:44:17.921666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:18.614 07:44:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.614 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:18.614 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.614 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.614 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.614 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:18.614 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.615 [2024-10-07 07:44:18.072537] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:18.615 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.872 [2024-10-07 07:44:18.224037] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:18.872 [2024-10-07 07:44:18.224255] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.872 [2024-10-07 07:44:18.324656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.872 [2024-10-07 07:44:18.324936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.872 [2024-10-07 07:44:18.324968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.872 BaseBdev2 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # [[ -z '' ]] 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:18.872 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 [ 00:23:19.131 { 00:23:19.131 "name": "BaseBdev2", 00:23:19.131 "aliases": [ 00:23:19.131 "2b4d85d8-5079-4a5a-bcd7-240d2353050d" 00:23:19.131 ], 00:23:19.131 "product_name": "Malloc disk", 00:23:19.131 "block_size": 512, 00:23:19.131 "num_blocks": 65536, 00:23:19.131 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:19.131 "assigned_rate_limits": { 00:23:19.131 "rw_ios_per_sec": 0, 00:23:19.131 "rw_mbytes_per_sec": 0, 00:23:19.131 "r_mbytes_per_sec": 0, 00:23:19.131 "w_mbytes_per_sec": 0 00:23:19.131 }, 00:23:19.131 "claimed": false, 00:23:19.131 "zoned": false, 00:23:19.131 "supported_io_types": { 00:23:19.131 "read": true, 00:23:19.131 "write": true, 00:23:19.131 "unmap": true, 00:23:19.131 "flush": true, 00:23:19.131 "reset": true, 00:23:19.131 "nvme_admin": false, 00:23:19.131 "nvme_io": false, 00:23:19.131 "nvme_io_md": false, 00:23:19.131 "write_zeroes": true, 00:23:19.131 "zcopy": true, 00:23:19.131 "get_zone_info": false, 00:23:19.131 "zone_management": false, 00:23:19.131 "zone_append": false, 00:23:19.131 "compare": false, 
00:23:19.131 "compare_and_write": false, 00:23:19.131 "abort": true, 00:23:19.131 "seek_hole": false, 00:23:19.131 "seek_data": false, 00:23:19.131 "copy": true, 00:23:19.131 "nvme_iov_md": false 00:23:19.131 }, 00:23:19.131 "memory_domains": [ 00:23:19.131 { 00:23:19.131 "dma_device_id": "system", 00:23:19.131 "dma_device_type": 1 00:23:19.131 }, 00:23:19.131 { 00:23:19.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.131 "dma_device_type": 2 00:23:19.131 } 00:23:19.131 ], 00:23:19.131 "driver_specific": {} 00:23:19.131 } 00:23:19.131 ] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 BaseBdev3 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' 
]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 [ 00:23:19.131 { 00:23:19.131 "name": "BaseBdev3", 00:23:19.131 "aliases": [ 00:23:19.131 "44b28831-7107-497a-b518-b37f790cea3e" 00:23:19.131 ], 00:23:19.131 "product_name": "Malloc disk", 00:23:19.131 "block_size": 512, 00:23:19.131 "num_blocks": 65536, 00:23:19.131 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:19.131 "assigned_rate_limits": { 00:23:19.131 "rw_ios_per_sec": 0, 00:23:19.131 "rw_mbytes_per_sec": 0, 00:23:19.131 "r_mbytes_per_sec": 0, 00:23:19.131 "w_mbytes_per_sec": 0 00:23:19.131 }, 00:23:19.131 "claimed": false, 00:23:19.131 "zoned": false, 00:23:19.131 "supported_io_types": { 00:23:19.131 "read": true, 00:23:19.131 "write": true, 00:23:19.131 "unmap": true, 00:23:19.131 "flush": true, 00:23:19.131 "reset": true, 00:23:19.131 "nvme_admin": false, 00:23:19.131 "nvme_io": false, 00:23:19.131 "nvme_io_md": false, 00:23:19.131 "write_zeroes": true, 00:23:19.131 "zcopy": true, 00:23:19.131 "get_zone_info": false, 00:23:19.131 "zone_management": false, 00:23:19.131 "zone_append": false, 00:23:19.131 "compare": false, 00:23:19.131 
"compare_and_write": false, 00:23:19.131 "abort": true, 00:23:19.131 "seek_hole": false, 00:23:19.131 "seek_data": false, 00:23:19.131 "copy": true, 00:23:19.131 "nvme_iov_md": false 00:23:19.131 }, 00:23:19.131 "memory_domains": [ 00:23:19.131 { 00:23:19.131 "dma_device_id": "system", 00:23:19.131 "dma_device_type": 1 00:23:19.131 }, 00:23:19.131 { 00:23:19.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.131 "dma_device_type": 2 00:23:19.131 } 00:23:19.131 ], 00:23:19.131 "driver_specific": {} 00:23:19.131 } 00:23:19.131 ] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 BaseBdev4 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 
00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 [ 00:23:19.131 { 00:23:19.131 "name": "BaseBdev4", 00:23:19.131 "aliases": [ 00:23:19.131 "5ed4a871-dfe6-4788-8f60-cae9074a70ea" 00:23:19.131 ], 00:23:19.131 "product_name": "Malloc disk", 00:23:19.131 "block_size": 512, 00:23:19.131 "num_blocks": 65536, 00:23:19.131 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:19.131 "assigned_rate_limits": { 00:23:19.131 "rw_ios_per_sec": 0, 00:23:19.131 "rw_mbytes_per_sec": 0, 00:23:19.131 "r_mbytes_per_sec": 0, 00:23:19.131 "w_mbytes_per_sec": 0 00:23:19.131 }, 00:23:19.131 "claimed": false, 00:23:19.131 "zoned": false, 00:23:19.131 "supported_io_types": { 00:23:19.131 "read": true, 00:23:19.131 "write": true, 00:23:19.131 "unmap": true, 00:23:19.131 "flush": true, 00:23:19.131 "reset": true, 00:23:19.131 "nvme_admin": false, 00:23:19.131 "nvme_io": false, 00:23:19.131 "nvme_io_md": false, 00:23:19.131 "write_zeroes": true, 00:23:19.131 "zcopy": true, 00:23:19.131 "get_zone_info": false, 00:23:19.131 "zone_management": false, 00:23:19.131 "zone_append": false, 00:23:19.131 "compare": false, 00:23:19.131 
"compare_and_write": false, 00:23:19.131 "abort": true, 00:23:19.131 "seek_hole": false, 00:23:19.131 "seek_data": false, 00:23:19.131 "copy": true, 00:23:19.131 "nvme_iov_md": false 00:23:19.131 }, 00:23:19.131 "memory_domains": [ 00:23:19.131 { 00:23:19.131 "dma_device_id": "system", 00:23:19.131 "dma_device_type": 1 00:23:19.131 }, 00:23:19.131 { 00:23:19.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.131 "dma_device_type": 2 00:23:19.131 } 00:23:19.131 ], 00:23:19.131 "driver_specific": {} 00:23:19.131 } 00:23:19.131 ] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:19.131 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.132 [2024-10-07 07:44:18.611927] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:19.132 [2024-10-07 07:44:18.612117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:19.132 [2024-10-07 07:44:18.612225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:19.132 [2024-10-07 07:44:18.614879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:19.132 [2024-10-07 07:44:18.615073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.132 "name": "Existed_Raid", 00:23:19.132 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:19.132 "strip_size_kb": 0, 00:23:19.132 "state": "configuring", 00:23:19.132 "raid_level": "raid1", 00:23:19.132 "superblock": false, 00:23:19.132 "num_base_bdevs": 4, 00:23:19.132 "num_base_bdevs_discovered": 3, 00:23:19.132 "num_base_bdevs_operational": 4, 00:23:19.132 "base_bdevs_list": [ 00:23:19.132 { 00:23:19.132 "name": "BaseBdev1", 00:23:19.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.132 "is_configured": false, 00:23:19.132 "data_offset": 0, 00:23:19.132 "data_size": 0 00:23:19.132 }, 00:23:19.132 { 00:23:19.132 "name": "BaseBdev2", 00:23:19.132 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:19.132 "is_configured": true, 00:23:19.132 "data_offset": 0, 00:23:19.132 "data_size": 65536 00:23:19.132 }, 00:23:19.132 { 00:23:19.132 "name": "BaseBdev3", 00:23:19.132 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:19.132 "is_configured": true, 00:23:19.132 "data_offset": 0, 00:23:19.132 "data_size": 65536 00:23:19.132 }, 00:23:19.132 { 00:23:19.132 "name": "BaseBdev4", 00:23:19.132 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:19.132 "is_configured": true, 00:23:19.132 "data_offset": 0, 00:23:19.132 "data_size": 65536 00:23:19.132 } 00:23:19.132 ] 00:23:19.132 }' 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.132 07:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.698 [2024-10-07 07:44:19.076086] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.698 "name": "Existed_Raid", 00:23:19.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.698 
"strip_size_kb": 0, 00:23:19.698 "state": "configuring", 00:23:19.698 "raid_level": "raid1", 00:23:19.698 "superblock": false, 00:23:19.698 "num_base_bdevs": 4, 00:23:19.698 "num_base_bdevs_discovered": 2, 00:23:19.698 "num_base_bdevs_operational": 4, 00:23:19.698 "base_bdevs_list": [ 00:23:19.698 { 00:23:19.698 "name": "BaseBdev1", 00:23:19.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.698 "is_configured": false, 00:23:19.698 "data_offset": 0, 00:23:19.698 "data_size": 0 00:23:19.698 }, 00:23:19.698 { 00:23:19.698 "name": null, 00:23:19.698 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:19.698 "is_configured": false, 00:23:19.698 "data_offset": 0, 00:23:19.698 "data_size": 65536 00:23:19.698 }, 00:23:19.698 { 00:23:19.698 "name": "BaseBdev3", 00:23:19.698 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:19.698 "is_configured": true, 00:23:19.698 "data_offset": 0, 00:23:19.698 "data_size": 65536 00:23:19.698 }, 00:23:19.698 { 00:23:19.698 "name": "BaseBdev4", 00:23:19.698 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:19.698 "is_configured": true, 00:23:19.698 "data_offset": 0, 00:23:19.698 "data_size": 65536 00:23:19.698 } 00:23:19.698 ] 00:23:19.698 }' 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.698 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.265 07:44:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.265 [2024-10-07 07:44:19.633071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:20.265 BaseBdev1 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.265 [ 00:23:20.265 { 00:23:20.265 "name": "BaseBdev1", 00:23:20.265 "aliases": [ 00:23:20.265 "1e2788e7-dadb-48de-a233-1014c5e7541c" 00:23:20.265 ], 00:23:20.265 "product_name": "Malloc disk", 00:23:20.265 "block_size": 512, 00:23:20.265 "num_blocks": 65536, 00:23:20.265 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:20.265 "assigned_rate_limits": { 00:23:20.265 "rw_ios_per_sec": 0, 00:23:20.265 "rw_mbytes_per_sec": 0, 00:23:20.265 "r_mbytes_per_sec": 0, 00:23:20.265 "w_mbytes_per_sec": 0 00:23:20.265 }, 00:23:20.265 "claimed": true, 00:23:20.265 "claim_type": "exclusive_write", 00:23:20.265 "zoned": false, 00:23:20.265 "supported_io_types": { 00:23:20.265 "read": true, 00:23:20.265 "write": true, 00:23:20.265 "unmap": true, 00:23:20.265 "flush": true, 00:23:20.265 "reset": true, 00:23:20.265 "nvme_admin": false, 00:23:20.265 "nvme_io": false, 00:23:20.265 "nvme_io_md": false, 00:23:20.265 "write_zeroes": true, 00:23:20.265 "zcopy": true, 00:23:20.265 "get_zone_info": false, 00:23:20.265 "zone_management": false, 00:23:20.265 "zone_append": false, 00:23:20.265 "compare": false, 00:23:20.265 "compare_and_write": false, 00:23:20.265 "abort": true, 00:23:20.265 "seek_hole": false, 00:23:20.265 "seek_data": false, 00:23:20.265 "copy": true, 00:23:20.265 "nvme_iov_md": false 00:23:20.265 }, 00:23:20.265 "memory_domains": [ 00:23:20.265 { 00:23:20.265 "dma_device_id": "system", 00:23:20.265 "dma_device_type": 1 00:23:20.265 }, 00:23:20.265 { 00:23:20.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.265 "dma_device_type": 2 00:23:20.265 } 00:23:20.265 ], 00:23:20.265 "driver_specific": {} 00:23:20.265 } 00:23:20.265 ] 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # return 0 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.265 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.265 "name": "Existed_Raid", 00:23:20.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.265 
"strip_size_kb": 0, 00:23:20.265 "state": "configuring", 00:23:20.265 "raid_level": "raid1", 00:23:20.265 "superblock": false, 00:23:20.265 "num_base_bdevs": 4, 00:23:20.265 "num_base_bdevs_discovered": 3, 00:23:20.265 "num_base_bdevs_operational": 4, 00:23:20.265 "base_bdevs_list": [ 00:23:20.265 { 00:23:20.265 "name": "BaseBdev1", 00:23:20.265 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:20.265 "is_configured": true, 00:23:20.265 "data_offset": 0, 00:23:20.265 "data_size": 65536 00:23:20.265 }, 00:23:20.265 { 00:23:20.265 "name": null, 00:23:20.265 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:20.265 "is_configured": false, 00:23:20.265 "data_offset": 0, 00:23:20.265 "data_size": 65536 00:23:20.265 }, 00:23:20.265 { 00:23:20.265 "name": "BaseBdev3", 00:23:20.265 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:20.265 "is_configured": true, 00:23:20.265 "data_offset": 0, 00:23:20.265 "data_size": 65536 00:23:20.266 }, 00:23:20.266 { 00:23:20.266 "name": "BaseBdev4", 00:23:20.266 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:20.266 "is_configured": true, 00:23:20.266 "data_offset": 0, 00:23:20.266 "data_size": 65536 00:23:20.266 } 00:23:20.266 ] 00:23:20.266 }' 00:23:20.266 07:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.266 07:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.831 
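As a reading aid (not part of the log): the shell helper `verify_raid_bdev_state` seen throughout this trace pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and compares fields of the matching entry against the expected state. The same check can be sketched in Python over JSON shaped like the `raid_bdev_info` dump above; the field names come from the log, but the exact set of comparisons the helper makes is an assumption.

```python
import json

# JSON mirroring the bdev_raid_get_bdevs output above: BaseBdev1, BaseBdev3
# and BaseBdev4 are configured, the BaseBdev2 slot has been removed (null).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, expected_level, expected_operational):
    # Count configured base bdevs, as the jq '.is_configured' probes in the log do.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["state"] == expected_state
    assert info["raid_level"] == expected_level
    assert discovered == info["num_base_bdevs_discovered"]
    assert info["num_base_bdevs"] == expected_operational

# Matches the call in the trace: verify_raid_bdev_state Existed_Raid configuring raid1 0 4
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 4)
```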
07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.831 [2024-10-07 07:44:20.225364] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:20.831 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.832 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.832 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:20.832 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.832 "name": "Existed_Raid", 00:23:20.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.832 "strip_size_kb": 0, 00:23:20.832 "state": "configuring", 00:23:20.832 "raid_level": "raid1", 00:23:20.832 "superblock": false, 00:23:20.832 "num_base_bdevs": 4, 00:23:20.832 "num_base_bdevs_discovered": 2, 00:23:20.832 "num_base_bdevs_operational": 4, 00:23:20.832 "base_bdevs_list": [ 00:23:20.832 { 00:23:20.832 "name": "BaseBdev1", 00:23:20.832 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:20.832 "is_configured": true, 00:23:20.832 "data_offset": 0, 00:23:20.832 "data_size": 65536 00:23:20.832 }, 00:23:20.832 { 00:23:20.832 "name": null, 00:23:20.832 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:20.832 "is_configured": false, 00:23:20.832 "data_offset": 0, 00:23:20.832 "data_size": 65536 00:23:20.832 }, 00:23:20.832 { 00:23:20.832 "name": null, 00:23:20.832 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:20.832 "is_configured": false, 00:23:20.832 "data_offset": 0, 00:23:20.832 "data_size": 65536 00:23:20.832 }, 00:23:20.832 { 00:23:20.832 "name": "BaseBdev4", 00:23:20.832 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:20.832 "is_configured": true, 00:23:20.832 "data_offset": 0, 00:23:20.832 "data_size": 65536 00:23:20.832 } 00:23:20.832 ] 00:23:20.832 }' 00:23:20.832 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.832 07:44:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.408 [2024-10-07 07:44:20.749490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.408 "name": "Existed_Raid", 00:23:21.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.408 "strip_size_kb": 0, 00:23:21.408 "state": "configuring", 00:23:21.408 "raid_level": "raid1", 00:23:21.408 "superblock": false, 00:23:21.408 "num_base_bdevs": 4, 00:23:21.408 "num_base_bdevs_discovered": 3, 00:23:21.408 "num_base_bdevs_operational": 4, 00:23:21.408 "base_bdevs_list": [ 00:23:21.408 { 00:23:21.408 "name": "BaseBdev1", 00:23:21.408 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:21.408 "is_configured": true, 00:23:21.408 "data_offset": 0, 00:23:21.408 "data_size": 65536 00:23:21.408 }, 00:23:21.408 { 00:23:21.408 "name": null, 00:23:21.408 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:21.408 "is_configured": false, 00:23:21.408 "data_offset": 0, 00:23:21.408 "data_size": 65536 00:23:21.408 }, 00:23:21.408 { 
00:23:21.408 "name": "BaseBdev3", 00:23:21.408 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:21.408 "is_configured": true, 00:23:21.408 "data_offset": 0, 00:23:21.408 "data_size": 65536 00:23:21.408 }, 00:23:21.408 { 00:23:21.408 "name": "BaseBdev4", 00:23:21.408 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:21.408 "is_configured": true, 00:23:21.408 "data_offset": 0, 00:23:21.408 "data_size": 65536 00:23:21.408 } 00:23:21.408 ] 00:23:21.408 }' 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.408 07:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.667 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.667 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:21.667 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:21.667 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.667 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.925 [2024-10-07 07:44:21.245663] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.925 "name": "Existed_Raid", 00:23:21.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.925 "strip_size_kb": 0, 00:23:21.925 "state": "configuring", 00:23:21.925 "raid_level": "raid1", 00:23:21.925 "superblock": false, 00:23:21.925 
"num_base_bdevs": 4, 00:23:21.925 "num_base_bdevs_discovered": 2, 00:23:21.925 "num_base_bdevs_operational": 4, 00:23:21.925 "base_bdevs_list": [ 00:23:21.925 { 00:23:21.925 "name": null, 00:23:21.925 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:21.925 "is_configured": false, 00:23:21.925 "data_offset": 0, 00:23:21.925 "data_size": 65536 00:23:21.925 }, 00:23:21.925 { 00:23:21.925 "name": null, 00:23:21.925 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:21.925 "is_configured": false, 00:23:21.925 "data_offset": 0, 00:23:21.925 "data_size": 65536 00:23:21.925 }, 00:23:21.925 { 00:23:21.925 "name": "BaseBdev3", 00:23:21.925 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:21.925 "is_configured": true, 00:23:21.925 "data_offset": 0, 00:23:21.925 "data_size": 65536 00:23:21.925 }, 00:23:21.925 { 00:23:21.925 "name": "BaseBdev4", 00:23:21.925 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:21.925 "is_configured": true, 00:23:21.925 "data_offset": 0, 00:23:21.925 "data_size": 65536 00:23:21.925 } 00:23:21.925 ] 00:23:21.925 }' 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.925 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:22.490 07:44:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.490 [2024-10-07 07:44:21.828353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:22.490 07:44:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.490 "name": "Existed_Raid", 00:23:22.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.490 "strip_size_kb": 0, 00:23:22.490 "state": "configuring", 00:23:22.490 "raid_level": "raid1", 00:23:22.490 "superblock": false, 00:23:22.490 "num_base_bdevs": 4, 00:23:22.490 "num_base_bdevs_discovered": 3, 00:23:22.490 "num_base_bdevs_operational": 4, 00:23:22.490 "base_bdevs_list": [ 00:23:22.490 { 00:23:22.490 "name": null, 00:23:22.490 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:22.490 "is_configured": false, 00:23:22.490 "data_offset": 0, 00:23:22.490 "data_size": 65536 00:23:22.490 }, 00:23:22.490 { 00:23:22.490 "name": "BaseBdev2", 00:23:22.490 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:22.490 "is_configured": true, 00:23:22.490 "data_offset": 0, 00:23:22.490 "data_size": 65536 00:23:22.490 }, 00:23:22.490 { 00:23:22.490 "name": "BaseBdev3", 00:23:22.490 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:22.490 "is_configured": true, 00:23:22.490 "data_offset": 0, 00:23:22.490 "data_size": 65536 00:23:22.490 }, 00:23:22.490 { 00:23:22.490 "name": "BaseBdev4", 00:23:22.490 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:22.490 "is_configured": true, 00:23:22.490 "data_offset": 0, 00:23:22.490 "data_size": 65536 00:23:22.490 } 00:23:22.490 ] 00:23:22.490 }' 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.490 07:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.748 07:44:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:22.748 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.748 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:22.748 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.748 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e2788e7-dadb-48de-a233-1014c5e7541c 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.009 [2024-10-07 07:44:22.413837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:23.009 [2024-10-07 07:44:22.414094] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:23.009 [2024-10-07 07:44:22.414124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:23.009 
[2024-10-07 07:44:22.414461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:23.009 [2024-10-07 07:44:22.414620] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:23.009 [2024-10-07 07:44:22.414631] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:23.009 [2024-10-07 07:44:22.414952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.009 NewBaseBdev 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local i 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.009 [ 00:23:23.009 { 00:23:23.009 "name": "NewBaseBdev", 00:23:23.009 "aliases": [ 00:23:23.009 "1e2788e7-dadb-48de-a233-1014c5e7541c" 00:23:23.009 ], 00:23:23.009 "product_name": "Malloc disk", 00:23:23.009 "block_size": 512, 00:23:23.009 "num_blocks": 65536, 00:23:23.009 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:23.009 "assigned_rate_limits": { 00:23:23.009 "rw_ios_per_sec": 0, 00:23:23.009 "rw_mbytes_per_sec": 0, 00:23:23.009 "r_mbytes_per_sec": 0, 00:23:23.009 "w_mbytes_per_sec": 0 00:23:23.009 }, 00:23:23.009 "claimed": true, 00:23:23.009 "claim_type": "exclusive_write", 00:23:23.009 "zoned": false, 00:23:23.009 "supported_io_types": { 00:23:23.009 "read": true, 00:23:23.009 "write": true, 00:23:23.009 "unmap": true, 00:23:23.009 "flush": true, 00:23:23.009 "reset": true, 00:23:23.009 "nvme_admin": false, 00:23:23.009 "nvme_io": false, 00:23:23.009 "nvme_io_md": false, 00:23:23.009 "write_zeroes": true, 00:23:23.009 "zcopy": true, 00:23:23.009 "get_zone_info": false, 00:23:23.009 "zone_management": false, 00:23:23.009 "zone_append": false, 00:23:23.009 "compare": false, 00:23:23.009 "compare_and_write": false, 00:23:23.009 "abort": true, 00:23:23.009 "seek_hole": false, 00:23:23.009 "seek_data": false, 00:23:23.009 "copy": true, 00:23:23.009 "nvme_iov_md": false 00:23:23.009 }, 00:23:23.009 "memory_domains": [ 00:23:23.009 { 00:23:23.009 "dma_device_id": "system", 00:23:23.009 "dma_device_type": 1 00:23:23.009 }, 00:23:23.009 { 00:23:23.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.009 "dma_device_type": 2 00:23:23.009 } 00:23:23.009 ], 00:23:23.009 "driver_specific": {} 00:23:23.009 } 00:23:23.009 ] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # return 0 
00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.009 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.010 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.010 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.010 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.010 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.010 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.010 "name": "Existed_Raid", 00:23:23.010 "uuid": "a3503b25-bd31-42d7-83be-08518a7de5a6", 00:23:23.010 "strip_size_kb": 0, 00:23:23.010 "state": "online", 00:23:23.010 
"raid_level": "raid1", 00:23:23.010 "superblock": false, 00:23:23.010 "num_base_bdevs": 4, 00:23:23.010 "num_base_bdevs_discovered": 4, 00:23:23.010 "num_base_bdevs_operational": 4, 00:23:23.010 "base_bdevs_list": [ 00:23:23.010 { 00:23:23.010 "name": "NewBaseBdev", 00:23:23.010 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:23.010 "is_configured": true, 00:23:23.010 "data_offset": 0, 00:23:23.010 "data_size": 65536 00:23:23.010 }, 00:23:23.010 { 00:23:23.010 "name": "BaseBdev2", 00:23:23.010 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:23.010 "is_configured": true, 00:23:23.010 "data_offset": 0, 00:23:23.010 "data_size": 65536 00:23:23.010 }, 00:23:23.010 { 00:23:23.010 "name": "BaseBdev3", 00:23:23.010 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:23.010 "is_configured": true, 00:23:23.010 "data_offset": 0, 00:23:23.010 "data_size": 65536 00:23:23.010 }, 00:23:23.010 { 00:23:23.010 "name": "BaseBdev4", 00:23:23.010 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:23.010 "is_configured": true, 00:23:23.010 "data_offset": 0, 00:23:23.010 "data_size": 65536 00:23:23.010 } 00:23:23.010 ] 00:23:23.010 }' 00:23:23.010 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.010 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:23.597 [2024-10-07 07:44:22.934402] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.597 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:23.597 "name": "Existed_Raid", 00:23:23.597 "aliases": [ 00:23:23.597 "a3503b25-bd31-42d7-83be-08518a7de5a6" 00:23:23.597 ], 00:23:23.597 "product_name": "Raid Volume", 00:23:23.597 "block_size": 512, 00:23:23.597 "num_blocks": 65536, 00:23:23.597 "uuid": "a3503b25-bd31-42d7-83be-08518a7de5a6", 00:23:23.597 "assigned_rate_limits": { 00:23:23.597 "rw_ios_per_sec": 0, 00:23:23.597 "rw_mbytes_per_sec": 0, 00:23:23.597 "r_mbytes_per_sec": 0, 00:23:23.597 "w_mbytes_per_sec": 0 00:23:23.597 }, 00:23:23.597 "claimed": false, 00:23:23.597 "zoned": false, 00:23:23.597 "supported_io_types": { 00:23:23.597 "read": true, 00:23:23.597 "write": true, 00:23:23.597 "unmap": false, 00:23:23.597 "flush": false, 00:23:23.597 "reset": true, 00:23:23.597 "nvme_admin": false, 00:23:23.597 "nvme_io": false, 00:23:23.597 "nvme_io_md": false, 00:23:23.597 "write_zeroes": true, 00:23:23.597 "zcopy": false, 00:23:23.597 "get_zone_info": false, 00:23:23.597 "zone_management": false, 00:23:23.597 "zone_append": false, 00:23:23.597 "compare": false, 00:23:23.597 "compare_and_write": false, 00:23:23.597 "abort": false, 00:23:23.597 "seek_hole": false, 00:23:23.597 "seek_data": false, 00:23:23.597 
"copy": false, 00:23:23.597 "nvme_iov_md": false 00:23:23.597 }, 00:23:23.597 "memory_domains": [ 00:23:23.597 { 00:23:23.597 "dma_device_id": "system", 00:23:23.597 "dma_device_type": 1 00:23:23.597 }, 00:23:23.597 { 00:23:23.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.597 "dma_device_type": 2 00:23:23.597 }, 00:23:23.597 { 00:23:23.597 "dma_device_id": "system", 00:23:23.597 "dma_device_type": 1 00:23:23.597 }, 00:23:23.597 { 00:23:23.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.597 "dma_device_type": 2 00:23:23.597 }, 00:23:23.597 { 00:23:23.597 "dma_device_id": "system", 00:23:23.597 "dma_device_type": 1 00:23:23.597 }, 00:23:23.597 { 00:23:23.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.597 "dma_device_type": 2 00:23:23.597 }, 00:23:23.597 { 00:23:23.597 "dma_device_id": "system", 00:23:23.597 "dma_device_type": 1 00:23:23.597 }, 00:23:23.597 { 00:23:23.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.597 "dma_device_type": 2 00:23:23.597 } 00:23:23.597 ], 00:23:23.597 "driver_specific": { 00:23:23.597 "raid": { 00:23:23.597 "uuid": "a3503b25-bd31-42d7-83be-08518a7de5a6", 00:23:23.597 "strip_size_kb": 0, 00:23:23.597 "state": "online", 00:23:23.597 "raid_level": "raid1", 00:23:23.597 "superblock": false, 00:23:23.597 "num_base_bdevs": 4, 00:23:23.597 "num_base_bdevs_discovered": 4, 00:23:23.597 "num_base_bdevs_operational": 4, 00:23:23.597 "base_bdevs_list": [ 00:23:23.597 { 00:23:23.597 "name": "NewBaseBdev", 00:23:23.597 "uuid": "1e2788e7-dadb-48de-a233-1014c5e7541c", 00:23:23.597 "is_configured": true, 00:23:23.597 "data_offset": 0, 00:23:23.598 "data_size": 65536 00:23:23.598 }, 00:23:23.598 { 00:23:23.598 "name": "BaseBdev2", 00:23:23.598 "uuid": "2b4d85d8-5079-4a5a-bcd7-240d2353050d", 00:23:23.598 "is_configured": true, 00:23:23.598 "data_offset": 0, 00:23:23.598 "data_size": 65536 00:23:23.598 }, 00:23:23.598 { 00:23:23.598 "name": "BaseBdev3", 00:23:23.598 "uuid": "44b28831-7107-497a-b518-b37f790cea3e", 00:23:23.598 
"is_configured": true, 00:23:23.598 "data_offset": 0, 00:23:23.598 "data_size": 65536 00:23:23.598 }, 00:23:23.598 { 00:23:23.598 "name": "BaseBdev4", 00:23:23.598 "uuid": "5ed4a871-dfe6-4788-8f60-cae9074a70ea", 00:23:23.598 "is_configured": true, 00:23:23.598 "data_offset": 0, 00:23:23.598 "data_size": 65536 00:23:23.598 } 00:23:23.598 ] 00:23:23.598 } 00:23:23.598 } 00:23:23.598 }' 00:23:23.598 07:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:23.598 BaseBdev2 00:23:23.598 BaseBdev3 00:23:23.598 BaseBdev4' 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.598 07:44:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.598 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.857 07:44:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 [2024-10-07 07:44:23.270089] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:23.857 [2024-10-07 07:44:23.270240] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.857 [2024-10-07 07:44:23.270411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.857 [2024-10-07 07:44:23.270786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.857 [2024-10-07 07:44:23.270896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73350 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 73350 ']' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # kill -0 73350 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # uname 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 73350 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 73350' 00:23:23.857 killing process with pid 73350 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # kill 73350 00:23:23.857 [2024-10-07 07:44:23.318036] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.857 07:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@977 -- # wait 73350 00:23:24.423 [2024-10-07 07:44:23.762857] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.800 07:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:25.800 00:23:25.800 real 0m12.282s 00:23:25.800 user 0m19.477s 00:23:25.800 sys 0m2.174s 00:23:25.800 07:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:25.800 ************************************ 00:23:25.800 END TEST raid_state_function_test 00:23:25.800 ************************************ 00:23:25.800 07:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:23:25.800 07:44:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:23:25.800 07:44:25 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:23:25.800 07:44:25 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:25.800 07:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:25.800 ************************************ 00:23:25.800 START TEST raid_state_function_test_sb 00:23:25.800 ************************************ 00:23:25.800 07:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 4 true 00:23:25.800 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:25.800 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.801 
07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74027 00:23:25.801 Process raid pid: 74027 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74027' 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74027 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 74027 ']' 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:25.801 07:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.801 [2024-10-07 07:44:25.348892] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:23:25.801 [2024-10-07 07:44:25.349241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.060 [2024-10-07 07:44:25.520829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.319 [2024-10-07 07:44:25.760382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.579 [2024-10-07 07:44:26.004925] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.579 [2024-10-07 07:44:26.005180] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.838 [2024-10-07 07:44:26.307420] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:26.838 [2024-10-07 07:44:26.307601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:26.838 [2024-10-07 07:44:26.307715] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:26.838 [2024-10-07 07:44:26.307789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:26.838 [2024-10-07 07:44:26.307824] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:23:26.838 [2024-10-07 07:44:26.307958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:26.838 [2024-10-07 07:44:26.307996] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:26.838 [2024-10-07 07:44:26.308035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.838 07:44:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:26.838 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.838 "name": "Existed_Raid", 00:23:26.838 "uuid": "62ffb6ab-9fec-4845-960c-74d9fac9090a", 00:23:26.838 "strip_size_kb": 0, 00:23:26.838 "state": "configuring", 00:23:26.838 "raid_level": "raid1", 00:23:26.838 "superblock": true, 00:23:26.838 "num_base_bdevs": 4, 00:23:26.838 "num_base_bdevs_discovered": 0, 00:23:26.838 "num_base_bdevs_operational": 4, 00:23:26.838 "base_bdevs_list": [ 00:23:26.838 { 00:23:26.838 "name": "BaseBdev1", 00:23:26.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.838 "is_configured": false, 00:23:26.838 "data_offset": 0, 00:23:26.838 "data_size": 0 00:23:26.838 }, 00:23:26.838 { 00:23:26.838 "name": "BaseBdev2", 00:23:26.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.838 "is_configured": false, 00:23:26.838 "data_offset": 0, 00:23:26.838 "data_size": 0 00:23:26.838 }, 00:23:26.838 { 00:23:26.839 "name": "BaseBdev3", 00:23:26.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.839 "is_configured": false, 00:23:26.839 "data_offset": 0, 00:23:26.839 "data_size": 0 00:23:26.839 }, 00:23:26.839 { 00:23:26.839 "name": "BaseBdev4", 00:23:26.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.839 "is_configured": false, 00:23:26.839 "data_offset": 0, 00:23:26.839 "data_size": 0 00:23:26.839 } 00:23:26.839 ] 00:23:26.839 }' 00:23:26.839 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.839 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.417 07:44:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:27.417 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.417 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.417 [2024-10-07 07:44:26.787423] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:27.417 [2024-10-07 07:44:26.787473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:27.417 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.417 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:27.417 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.417 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.417 [2024-10-07 07:44:26.799469] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:27.417 [2024-10-07 07:44:26.799804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:27.417 [2024-10-07 07:44:26.799917] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:27.417 [2024-10-07 07:44:26.799966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:27.417 [2024-10-07 07:44:26.800111] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:27.417 [2024-10-07 07:44:26.800157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:27.417 [2024-10-07 07:44:26.800189] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
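The trace above deletes `Existed_Raid` and re-creates it while none of the four base bdevs exist yet, then `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` checks the JSON returned by `bdev_raid_get_bdevs`. A minimal sketch of that state check, run against the same one-field-per-line JSON shape the log captures (the `get_field` helper and the trimmed-down JSON literal are illustrative assumptions, not the autotest source; field extraction uses `sed` so the sketch has no dependency beyond a POSIX shell):

```shell
# Trimmed-down copy of the raid_bdev_info JSON shape seen in this log.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}'

# Pull one scalar field out of the one-field-per-line JSON dump
# (hypothetical helper; the real test pipes through jq -r instead).
get_field() {
  printf '%s\n' "$raid_bdev_info" |
    sed -n "s/^ *\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p"
}

state=$(get_field state)
raid_level=$(get_field raid_level)
discovered=$(get_field num_base_bdevs_discovered)

# With no base bdevs present the raid must sit in "configuring".
[ "$state" = "configuring" ] && [ "$raid_level" = "raid1" ] &&
  echo "state OK ($discovered/4 base bdevs discovered)"
```

The real helper compares every one of these fields against its `expected_*` locals; this sketch only shows the extraction-and-compare pattern.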
00:23:27.418 [2024-10-07 07:44:26.800348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.418 [2024-10-07 07:44:26.859936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.418 BaseBdev1 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
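After `bdev_malloc_create 32 512 -b BaseBdev1`, the xtrace shows `waitforbdev` setting `bdev_timeout=2000`, calling `bdev_wait_for_examine`, then `bdev_get_bdevs -b BaseBdev1 -t 2000`, so the waiting is delegated to the RPC's `-t` timeout. A loose sketch of that flow, assuming only what the trace shows (the `rpc_cmd` stub below is a stand-in so the sketch runs without a live SPDK target):

```shell
# Sketch of the waitforbdev helper as reconstructed from the xtrace;
# argument handling and error paths in the real autotest_common.sh differ.
waitforbdev() {
  local bdev_name=$1
  local bdev_timeout=${2:-2000}   # ms, passed through as bdev_get_bdevs -t
  rpc_cmd bdev_wait_for_examine || return 1
  rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
}

# Stub so the sketch is self-contained; the real rpc_cmd talks to bdev_svc.
rpc_cmd() { echo "rpc: $*" >&2; }

waitforbdev BaseBdev1 && echo "BaseBdev1 ready"
```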
00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.418 [ 00:23:27.418 { 00:23:27.418 "name": "BaseBdev1", 00:23:27.418 "aliases": [ 00:23:27.418 "f66129e7-7055-4aa2-a1e8-47e0bcda6702" 00:23:27.418 ], 00:23:27.418 "product_name": "Malloc disk", 00:23:27.418 "block_size": 512, 00:23:27.418 "num_blocks": 65536, 00:23:27.418 "uuid": "f66129e7-7055-4aa2-a1e8-47e0bcda6702", 00:23:27.418 "assigned_rate_limits": { 00:23:27.418 "rw_ios_per_sec": 0, 00:23:27.418 "rw_mbytes_per_sec": 0, 00:23:27.418 "r_mbytes_per_sec": 0, 00:23:27.418 "w_mbytes_per_sec": 0 00:23:27.418 }, 00:23:27.418 "claimed": true, 00:23:27.418 "claim_type": "exclusive_write", 00:23:27.418 "zoned": false, 00:23:27.418 "supported_io_types": { 00:23:27.418 "read": true, 00:23:27.418 "write": true, 00:23:27.418 "unmap": true, 00:23:27.418 "flush": true, 00:23:27.418 "reset": true, 00:23:27.418 "nvme_admin": false, 00:23:27.418 "nvme_io": false, 00:23:27.418 "nvme_io_md": false, 00:23:27.418 "write_zeroes": true, 00:23:27.418 "zcopy": true, 00:23:27.418 "get_zone_info": false, 00:23:27.418 "zone_management": false, 00:23:27.418 "zone_append": false, 00:23:27.418 "compare": false, 00:23:27.418 "compare_and_write": false, 00:23:27.418 "abort": true, 00:23:27.418 "seek_hole": false, 00:23:27.418 "seek_data": false, 00:23:27.418 "copy": true, 00:23:27.418 "nvme_iov_md": false 00:23:27.418 }, 00:23:27.418 "memory_domains": [ 00:23:27.418 { 00:23:27.418 "dma_device_id": "system", 00:23:27.418 "dma_device_type": 1 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.418 "dma_device_type": 2 00:23:27.418 } 00:23:27.418 ], 00:23:27.418 "driver_specific": {} 
00:23:27.418 } 00:23:27.418 ] 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.418 "name": "Existed_Raid", 00:23:27.418 "uuid": "1c03802e-4bdf-4c4b-a344-625c2cee6d73", 00:23:27.418 "strip_size_kb": 0, 00:23:27.418 "state": "configuring", 00:23:27.418 "raid_level": "raid1", 00:23:27.418 "superblock": true, 00:23:27.418 "num_base_bdevs": 4, 00:23:27.418 "num_base_bdevs_discovered": 1, 00:23:27.418 "num_base_bdevs_operational": 4, 00:23:27.418 "base_bdevs_list": [ 00:23:27.418 { 00:23:27.418 "name": "BaseBdev1", 00:23:27.418 "uuid": "f66129e7-7055-4aa2-a1e8-47e0bcda6702", 00:23:27.418 "is_configured": true, 00:23:27.418 "data_offset": 2048, 00:23:27.418 "data_size": 63488 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "name": "BaseBdev2", 00:23:27.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.418 "is_configured": false, 00:23:27.418 "data_offset": 0, 00:23:27.418 "data_size": 0 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "name": "BaseBdev3", 00:23:27.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.418 "is_configured": false, 00:23:27.418 "data_offset": 0, 00:23:27.418 "data_size": 0 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "name": "BaseBdev4", 00:23:27.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.418 "is_configured": false, 00:23:27.418 "data_offset": 0, 00:23:27.418 "data_size": 0 00:23:27.418 } 00:23:27.418 ] 00:23:27.418 }' 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.418 07:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.985 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:27.985 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.985 07:44:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:27.985 [2024-10-07 07:44:27.384101] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:27.985 [2024-10-07 07:44:27.384154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:27.985 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.985 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:27.985 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.985 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.985 [2024-10-07 07:44:27.392135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.986 [2024-10-07 07:44:27.394465] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:27.986 [2024-10-07 07:44:27.394617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:27.986 [2024-10-07 07:44:27.394751] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:27.986 [2024-10-07 07:44:27.394804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:27.986 [2024-10-07 07:44:27.394837] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:27.986 [2024-10-07 07:44:27.394872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:27.986 07:44:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.986 "name": 
"Existed_Raid", 00:23:27.986 "uuid": "619b1dbf-505e-4071-a08d-5463e7158b57", 00:23:27.986 "strip_size_kb": 0, 00:23:27.986 "state": "configuring", 00:23:27.986 "raid_level": "raid1", 00:23:27.986 "superblock": true, 00:23:27.986 "num_base_bdevs": 4, 00:23:27.986 "num_base_bdevs_discovered": 1, 00:23:27.986 "num_base_bdevs_operational": 4, 00:23:27.986 "base_bdevs_list": [ 00:23:27.986 { 00:23:27.986 "name": "BaseBdev1", 00:23:27.986 "uuid": "f66129e7-7055-4aa2-a1e8-47e0bcda6702", 00:23:27.986 "is_configured": true, 00:23:27.986 "data_offset": 2048, 00:23:27.986 "data_size": 63488 00:23:27.986 }, 00:23:27.986 { 00:23:27.986 "name": "BaseBdev2", 00:23:27.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.986 "is_configured": false, 00:23:27.986 "data_offset": 0, 00:23:27.986 "data_size": 0 00:23:27.986 }, 00:23:27.986 { 00:23:27.986 "name": "BaseBdev3", 00:23:27.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.986 "is_configured": false, 00:23:27.986 "data_offset": 0, 00:23:27.986 "data_size": 0 00:23:27.986 }, 00:23:27.986 { 00:23:27.986 "name": "BaseBdev4", 00:23:27.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.986 "is_configured": false, 00:23:27.986 "data_offset": 0, 00:23:27.986 "data_size": 0 00:23:27.986 } 00:23:27.986 ] 00:23:27.986 }' 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.986 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.553 BaseBdev2 00:23:28.553 [2024-10-07 07:44:27.867881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.553 [ 00:23:28.553 { 00:23:28.553 "name": "BaseBdev2", 00:23:28.553 "aliases": [ 00:23:28.553 "d76b105f-6c32-4eb7-bea9-591a6fd92c0d" 00:23:28.553 ], 00:23:28.553 "product_name": "Malloc disk", 00:23:28.553 "block_size": 512, 00:23:28.553 "num_blocks": 65536, 00:23:28.553 "uuid": "d76b105f-6c32-4eb7-bea9-591a6fd92c0d", 00:23:28.553 "assigned_rate_limits": { 
00:23:28.553 "rw_ios_per_sec": 0, 00:23:28.553 "rw_mbytes_per_sec": 0, 00:23:28.553 "r_mbytes_per_sec": 0, 00:23:28.553 "w_mbytes_per_sec": 0 00:23:28.553 }, 00:23:28.553 "claimed": true, 00:23:28.553 "claim_type": "exclusive_write", 00:23:28.553 "zoned": false, 00:23:28.553 "supported_io_types": { 00:23:28.553 "read": true, 00:23:28.553 "write": true, 00:23:28.553 "unmap": true, 00:23:28.553 "flush": true, 00:23:28.553 "reset": true, 00:23:28.553 "nvme_admin": false, 00:23:28.553 "nvme_io": false, 00:23:28.553 "nvme_io_md": false, 00:23:28.553 "write_zeroes": true, 00:23:28.553 "zcopy": true, 00:23:28.553 "get_zone_info": false, 00:23:28.553 "zone_management": false, 00:23:28.553 "zone_append": false, 00:23:28.553 "compare": false, 00:23:28.553 "compare_and_write": false, 00:23:28.553 "abort": true, 00:23:28.553 "seek_hole": false, 00:23:28.553 "seek_data": false, 00:23:28.553 "copy": true, 00:23:28.553 "nvme_iov_md": false 00:23:28.553 }, 00:23:28.553 "memory_domains": [ 00:23:28.553 { 00:23:28.553 "dma_device_id": "system", 00:23:28.553 "dma_device_type": 1 00:23:28.553 }, 00:23:28.553 { 00:23:28.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.553 "dma_device_type": 2 00:23:28.553 } 00:23:28.553 ], 00:23:28.553 "driver_specific": {} 00:23:28.553 } 00:23:28.553 ] 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:28.553 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.554 "name": "Existed_Raid", 00:23:28.554 "uuid": "619b1dbf-505e-4071-a08d-5463e7158b57", 00:23:28.554 "strip_size_kb": 0, 00:23:28.554 "state": "configuring", 00:23:28.554 "raid_level": "raid1", 00:23:28.554 "superblock": true, 00:23:28.554 "num_base_bdevs": 4, 00:23:28.554 "num_base_bdevs_discovered": 2, 00:23:28.554 "num_base_bdevs_operational": 4, 00:23:28.554 
"base_bdevs_list": [ 00:23:28.554 { 00:23:28.554 "name": "BaseBdev1", 00:23:28.554 "uuid": "f66129e7-7055-4aa2-a1e8-47e0bcda6702", 00:23:28.554 "is_configured": true, 00:23:28.554 "data_offset": 2048, 00:23:28.554 "data_size": 63488 00:23:28.554 }, 00:23:28.554 { 00:23:28.554 "name": "BaseBdev2", 00:23:28.554 "uuid": "d76b105f-6c32-4eb7-bea9-591a6fd92c0d", 00:23:28.554 "is_configured": true, 00:23:28.554 "data_offset": 2048, 00:23:28.554 "data_size": 63488 00:23:28.554 }, 00:23:28.554 { 00:23:28.554 "name": "BaseBdev3", 00:23:28.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.554 "is_configured": false, 00:23:28.554 "data_offset": 0, 00:23:28.554 "data_size": 0 00:23:28.554 }, 00:23:28.554 { 00:23:28.554 "name": "BaseBdev4", 00:23:28.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.554 "is_configured": false, 00:23:28.554 "data_offset": 0, 00:23:28.554 "data_size": 0 00:23:28.554 } 00:23:28.554 ] 00:23:28.554 }' 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.554 07:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.812 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:28.812 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:28.812 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 [2024-10-07 07:44:28.399165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:29.070 BaseBdev3 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local 
bdev_name=BaseBdev3 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 [ 00:23:29.070 { 00:23:29.070 "name": "BaseBdev3", 00:23:29.070 "aliases": [ 00:23:29.070 "c3bbdb44-e4f0-47b7-a69f-e450bb993806" 00:23:29.070 ], 00:23:29.070 "product_name": "Malloc disk", 00:23:29.070 "block_size": 512, 00:23:29.070 "num_blocks": 65536, 00:23:29.070 "uuid": "c3bbdb44-e4f0-47b7-a69f-e450bb993806", 00:23:29.070 "assigned_rate_limits": { 00:23:29.070 "rw_ios_per_sec": 0, 00:23:29.070 "rw_mbytes_per_sec": 0, 00:23:29.070 "r_mbytes_per_sec": 0, 00:23:29.070 "w_mbytes_per_sec": 0 00:23:29.070 }, 00:23:29.070 "claimed": true, 00:23:29.070 "claim_type": "exclusive_write", 00:23:29.070 "zoned": false, 00:23:29.070 "supported_io_types": { 00:23:29.070 "read": true, 00:23:29.070 
"write": true, 00:23:29.070 "unmap": true, 00:23:29.070 "flush": true, 00:23:29.070 "reset": true, 00:23:29.070 "nvme_admin": false, 00:23:29.070 "nvme_io": false, 00:23:29.070 "nvme_io_md": false, 00:23:29.070 "write_zeroes": true, 00:23:29.070 "zcopy": true, 00:23:29.070 "get_zone_info": false, 00:23:29.070 "zone_management": false, 00:23:29.070 "zone_append": false, 00:23:29.070 "compare": false, 00:23:29.070 "compare_and_write": false, 00:23:29.070 "abort": true, 00:23:29.070 "seek_hole": false, 00:23:29.070 "seek_data": false, 00:23:29.070 "copy": true, 00:23:29.070 "nvme_iov_md": false 00:23:29.070 }, 00:23:29.070 "memory_domains": [ 00:23:29.070 { 00:23:29.070 "dma_device_id": "system", 00:23:29.070 "dma_device_type": 1 00:23:29.070 }, 00:23:29.070 { 00:23:29.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.070 "dma_device_type": 2 00:23:29.070 } 00:23:29.070 ], 00:23:29.070 "driver_specific": {} 00:23:29.070 } 00:23:29.070 ] 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.070 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.070 "name": "Existed_Raid", 00:23:29.070 "uuid": "619b1dbf-505e-4071-a08d-5463e7158b57", 00:23:29.070 "strip_size_kb": 0, 00:23:29.070 "state": "configuring", 00:23:29.070 "raid_level": "raid1", 00:23:29.070 "superblock": true, 00:23:29.070 "num_base_bdevs": 4, 00:23:29.070 "num_base_bdevs_discovered": 3, 00:23:29.070 "num_base_bdevs_operational": 4, 00:23:29.070 "base_bdevs_list": [ 00:23:29.070 { 00:23:29.070 "name": "BaseBdev1", 00:23:29.070 "uuid": "f66129e7-7055-4aa2-a1e8-47e0bcda6702", 00:23:29.070 "is_configured": true, 00:23:29.070 "data_offset": 2048, 00:23:29.071 "data_size": 63488 00:23:29.071 }, 00:23:29.071 { 00:23:29.071 "name": "BaseBdev2", 00:23:29.071 "uuid": 
"d76b105f-6c32-4eb7-bea9-591a6fd92c0d", 00:23:29.071 "is_configured": true, 00:23:29.071 "data_offset": 2048, 00:23:29.071 "data_size": 63488 00:23:29.071 }, 00:23:29.071 { 00:23:29.071 "name": "BaseBdev3", 00:23:29.071 "uuid": "c3bbdb44-e4f0-47b7-a69f-e450bb993806", 00:23:29.071 "is_configured": true, 00:23:29.071 "data_offset": 2048, 00:23:29.071 "data_size": 63488 00:23:29.071 }, 00:23:29.071 { 00:23:29.071 "name": "BaseBdev4", 00:23:29.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.071 "is_configured": false, 00:23:29.071 "data_offset": 0, 00:23:29.071 "data_size": 0 00:23:29.071 } 00:23:29.071 ] 00:23:29.071 }' 00:23:29.071 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.071 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.636 [2024-10-07 07:44:28.943541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:29.636 [2024-10-07 07:44:28.943915] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:29.636 [2024-10-07 07:44:28.943941] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:29.636 [2024-10-07 07:44:28.944296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:29.636 BaseBdev4 00:23:29.636 [2024-10-07 07:44:28.944468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:29.636 [2024-10-07 07:44:28.944493] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:23:29.636 [2024-10-07 07:44:28.944673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:29.636 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.636 [ 00:23:29.636 { 00:23:29.636 "name": "BaseBdev4", 00:23:29.636 "aliases": [ 00:23:29.636 "e3d6a1f6-cd0a-44e3-8bf5-aed01ce40c39" 00:23:29.636 ], 00:23:29.636 "product_name": "Malloc disk", 00:23:29.636 "block_size": 512, 00:23:29.636 
"num_blocks": 65536, 00:23:29.636 "uuid": "e3d6a1f6-cd0a-44e3-8bf5-aed01ce40c39", 00:23:29.636 "assigned_rate_limits": { 00:23:29.636 "rw_ios_per_sec": 0, 00:23:29.636 "rw_mbytes_per_sec": 0, 00:23:29.636 "r_mbytes_per_sec": 0, 00:23:29.636 "w_mbytes_per_sec": 0 00:23:29.636 }, 00:23:29.636 "claimed": true, 00:23:29.636 "claim_type": "exclusive_write", 00:23:29.636 "zoned": false, 00:23:29.636 "supported_io_types": { 00:23:29.636 "read": true, 00:23:29.636 "write": true, 00:23:29.636 "unmap": true, 00:23:29.636 "flush": true, 00:23:29.636 "reset": true, 00:23:29.636 "nvme_admin": false, 00:23:29.636 "nvme_io": false, 00:23:29.636 "nvme_io_md": false, 00:23:29.636 "write_zeroes": true, 00:23:29.636 "zcopy": true, 00:23:29.636 "get_zone_info": false, 00:23:29.636 "zone_management": false, 00:23:29.637 "zone_append": false, 00:23:29.637 "compare": false, 00:23:29.637 "compare_and_write": false, 00:23:29.637 "abort": true, 00:23:29.637 "seek_hole": false, 00:23:29.637 "seek_data": false, 00:23:29.637 "copy": true, 00:23:29.637 "nvme_iov_md": false 00:23:29.637 }, 00:23:29.637 "memory_domains": [ 00:23:29.637 { 00:23:29.637 "dma_device_id": "system", 00:23:29.637 "dma_device_type": 1 00:23:29.637 }, 00:23:29.637 { 00:23:29.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.637 "dma_device_type": 2 00:23:29.637 } 00:23:29.637 ], 00:23:29.637 "driver_specific": {} 00:23:29.637 } 00:23:29.637 ] 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.637 07:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.637 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:29.637 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.637 "name": "Existed_Raid", 00:23:29.637 "uuid": "619b1dbf-505e-4071-a08d-5463e7158b57", 00:23:29.637 "strip_size_kb": 0, 00:23:29.637 "state": "online", 00:23:29.637 "raid_level": "raid1", 00:23:29.637 "superblock": true, 00:23:29.637 "num_base_bdevs": 4, 
00:23:29.637 "num_base_bdevs_discovered": 4, 00:23:29.637 "num_base_bdevs_operational": 4, 00:23:29.637 "base_bdevs_list": [ 00:23:29.637 { 00:23:29.637 "name": "BaseBdev1", 00:23:29.637 "uuid": "f66129e7-7055-4aa2-a1e8-47e0bcda6702", 00:23:29.637 "is_configured": true, 00:23:29.637 "data_offset": 2048, 00:23:29.637 "data_size": 63488 00:23:29.637 }, 00:23:29.637 { 00:23:29.637 "name": "BaseBdev2", 00:23:29.637 "uuid": "d76b105f-6c32-4eb7-bea9-591a6fd92c0d", 00:23:29.637 "is_configured": true, 00:23:29.637 "data_offset": 2048, 00:23:29.637 "data_size": 63488 00:23:29.637 }, 00:23:29.637 { 00:23:29.637 "name": "BaseBdev3", 00:23:29.637 "uuid": "c3bbdb44-e4f0-47b7-a69f-e450bb993806", 00:23:29.637 "is_configured": true, 00:23:29.637 "data_offset": 2048, 00:23:29.637 "data_size": 63488 00:23:29.637 }, 00:23:29.637 { 00:23:29.637 "name": "BaseBdev4", 00:23:29.637 "uuid": "e3d6a1f6-cd0a-44e3-8bf5-aed01ce40c39", 00:23:29.637 "is_configured": true, 00:23:29.637 "data_offset": 2048, 00:23:29.637 "data_size": 63488 00:23:29.637 } 00:23:29.637 ] 00:23:29.637 }' 00:23:29.637 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.637 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:30.202 
07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.202 [2024-10-07 07:44:29.492110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:30.202 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.202 "name": "Existed_Raid", 00:23:30.202 "aliases": [ 00:23:30.202 "619b1dbf-505e-4071-a08d-5463e7158b57" 00:23:30.202 ], 00:23:30.202 "product_name": "Raid Volume", 00:23:30.202 "block_size": 512, 00:23:30.202 "num_blocks": 63488, 00:23:30.202 "uuid": "619b1dbf-505e-4071-a08d-5463e7158b57", 00:23:30.202 "assigned_rate_limits": { 00:23:30.202 "rw_ios_per_sec": 0, 00:23:30.202 "rw_mbytes_per_sec": 0, 00:23:30.202 "r_mbytes_per_sec": 0, 00:23:30.202 "w_mbytes_per_sec": 0 00:23:30.202 }, 00:23:30.202 "claimed": false, 00:23:30.202 "zoned": false, 00:23:30.202 "supported_io_types": { 00:23:30.202 "read": true, 00:23:30.202 "write": true, 00:23:30.202 "unmap": false, 00:23:30.202 "flush": false, 00:23:30.202 "reset": true, 00:23:30.202 "nvme_admin": false, 00:23:30.202 "nvme_io": false, 00:23:30.202 "nvme_io_md": false, 00:23:30.202 "write_zeroes": true, 00:23:30.202 "zcopy": false, 00:23:30.202 "get_zone_info": false, 00:23:30.202 "zone_management": false, 00:23:30.202 "zone_append": false, 00:23:30.202 "compare": false, 00:23:30.202 "compare_and_write": false, 00:23:30.202 "abort": false, 00:23:30.202 "seek_hole": false, 00:23:30.202 "seek_data": false, 00:23:30.202 "copy": false, 00:23:30.202 
"nvme_iov_md": false 00:23:30.202 }, 00:23:30.202 "memory_domains": [ 00:23:30.202 { 00:23:30.202 "dma_device_id": "system", 00:23:30.202 "dma_device_type": 1 00:23:30.202 }, 00:23:30.202 { 00:23:30.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.202 "dma_device_type": 2 00:23:30.202 }, 00:23:30.202 { 00:23:30.202 "dma_device_id": "system", 00:23:30.202 "dma_device_type": 1 00:23:30.202 }, 00:23:30.202 { 00:23:30.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.202 "dma_device_type": 2 00:23:30.202 }, 00:23:30.202 { 00:23:30.202 "dma_device_id": "system", 00:23:30.202 "dma_device_type": 1 00:23:30.202 }, 00:23:30.202 { 00:23:30.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.202 "dma_device_type": 2 00:23:30.202 }, 00:23:30.202 { 00:23:30.202 "dma_device_id": "system", 00:23:30.202 "dma_device_type": 1 00:23:30.202 }, 00:23:30.202 { 00:23:30.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.203 "dma_device_type": 2 00:23:30.203 } 00:23:30.203 ], 00:23:30.203 "driver_specific": { 00:23:30.203 "raid": { 00:23:30.203 "uuid": "619b1dbf-505e-4071-a08d-5463e7158b57", 00:23:30.203 "strip_size_kb": 0, 00:23:30.203 "state": "online", 00:23:30.203 "raid_level": "raid1", 00:23:30.203 "superblock": true, 00:23:30.203 "num_base_bdevs": 4, 00:23:30.203 "num_base_bdevs_discovered": 4, 00:23:30.203 "num_base_bdevs_operational": 4, 00:23:30.203 "base_bdevs_list": [ 00:23:30.203 { 00:23:30.203 "name": "BaseBdev1", 00:23:30.203 "uuid": "f66129e7-7055-4aa2-a1e8-47e0bcda6702", 00:23:30.203 "is_configured": true, 00:23:30.203 "data_offset": 2048, 00:23:30.203 "data_size": 63488 00:23:30.203 }, 00:23:30.203 { 00:23:30.203 "name": "BaseBdev2", 00:23:30.203 "uuid": "d76b105f-6c32-4eb7-bea9-591a6fd92c0d", 00:23:30.203 "is_configured": true, 00:23:30.203 "data_offset": 2048, 00:23:30.203 "data_size": 63488 00:23:30.203 }, 00:23:30.203 { 00:23:30.203 "name": "BaseBdev3", 00:23:30.203 "uuid": "c3bbdb44-e4f0-47b7-a69f-e450bb993806", 00:23:30.203 "is_configured": true, 
00:23:30.203 "data_offset": 2048, 00:23:30.203 "data_size": 63488 00:23:30.203 }, 00:23:30.203 { 00:23:30.203 "name": "BaseBdev4", 00:23:30.203 "uuid": "e3d6a1f6-cd0a-44e3-8bf5-aed01ce40c39", 00:23:30.203 "is_configured": true, 00:23:30.203 "data_offset": 2048, 00:23:30.203 "data_size": 63488 00:23:30.203 } 00:23:30.203 ] 00:23:30.203 } 00:23:30.203 } 00:23:30.203 }' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:30.203 BaseBdev2 00:23:30.203 BaseBdev3 00:23:30.203 BaseBdev4' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.203 07:44:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.203 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.460 [2024-10-07 07:44:29.823886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:30.460 07:44:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.460 "name": "Existed_Raid", 00:23:30.460 "uuid": "619b1dbf-505e-4071-a08d-5463e7158b57", 00:23:30.460 "strip_size_kb": 0, 00:23:30.460 
"state": "online", 00:23:30.460 "raid_level": "raid1", 00:23:30.460 "superblock": true, 00:23:30.460 "num_base_bdevs": 4, 00:23:30.460 "num_base_bdevs_discovered": 3, 00:23:30.460 "num_base_bdevs_operational": 3, 00:23:30.460 "base_bdevs_list": [ 00:23:30.460 { 00:23:30.460 "name": null, 00:23:30.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.460 "is_configured": false, 00:23:30.460 "data_offset": 0, 00:23:30.460 "data_size": 63488 00:23:30.460 }, 00:23:30.460 { 00:23:30.460 "name": "BaseBdev2", 00:23:30.460 "uuid": "d76b105f-6c32-4eb7-bea9-591a6fd92c0d", 00:23:30.460 "is_configured": true, 00:23:30.460 "data_offset": 2048, 00:23:30.460 "data_size": 63488 00:23:30.460 }, 00:23:30.460 { 00:23:30.460 "name": "BaseBdev3", 00:23:30.460 "uuid": "c3bbdb44-e4f0-47b7-a69f-e450bb993806", 00:23:30.460 "is_configured": true, 00:23:30.460 "data_offset": 2048, 00:23:30.460 "data_size": 63488 00:23:30.460 }, 00:23:30.460 { 00:23:30.460 "name": "BaseBdev4", 00:23:30.460 "uuid": "e3d6a1f6-cd0a-44e3-8bf5-aed01ce40c39", 00:23:30.460 "is_configured": true, 00:23:30.460 "data_offset": 2048, 00:23:30.460 "data_size": 63488 00:23:30.460 } 00:23:30.460 ] 00:23:30.460 }' 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.460 07:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.026 07:44:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.026 [2024-10-07 07:44:30.411120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.026 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.026 [2024-10-07 07:44:30.580954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.284 [2024-10-07 07:44:30.725083] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:31.284 [2024-10-07 07:44:30.725338] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.284 [2024-10-07 07:44:30.826810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.284 [2024-10-07 07:44:30.827094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.284 [2024-10-07 07:44:30.827136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.284 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.285 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:31.285 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 BaseBdev2 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:23:31.544 [ 00:23:31.544 { 00:23:31.544 "name": "BaseBdev2", 00:23:31.544 "aliases": [ 00:23:31.544 "f9a627ce-e775-4255-906c-1cca2e0f41b6" 00:23:31.544 ], 00:23:31.544 "product_name": "Malloc disk", 00:23:31.544 "block_size": 512, 00:23:31.544 "num_blocks": 65536, 00:23:31.544 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:31.544 "assigned_rate_limits": { 00:23:31.544 "rw_ios_per_sec": 0, 00:23:31.544 "rw_mbytes_per_sec": 0, 00:23:31.544 "r_mbytes_per_sec": 0, 00:23:31.544 "w_mbytes_per_sec": 0 00:23:31.544 }, 00:23:31.544 "claimed": false, 00:23:31.544 "zoned": false, 00:23:31.544 "supported_io_types": { 00:23:31.544 "read": true, 00:23:31.544 "write": true, 00:23:31.544 "unmap": true, 00:23:31.544 "flush": true, 00:23:31.544 "reset": true, 00:23:31.544 "nvme_admin": false, 00:23:31.544 "nvme_io": false, 00:23:31.544 "nvme_io_md": false, 00:23:31.544 "write_zeroes": true, 00:23:31.544 "zcopy": true, 00:23:31.544 "get_zone_info": false, 00:23:31.544 "zone_management": false, 00:23:31.544 "zone_append": false, 00:23:31.544 "compare": false, 00:23:31.544 "compare_and_write": false, 00:23:31.544 "abort": true, 00:23:31.544 "seek_hole": false, 00:23:31.544 "seek_data": false, 00:23:31.544 "copy": true, 00:23:31.544 "nvme_iov_md": false 00:23:31.544 }, 00:23:31.544 "memory_domains": [ 00:23:31.544 { 00:23:31.544 "dma_device_id": "system", 00:23:31.544 "dma_device_type": 1 00:23:31.544 }, 00:23:31.544 { 00:23:31.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.544 "dma_device_type": 2 00:23:31.544 } 00:23:31.544 ], 00:23:31.544 "driver_specific": {} 00:23:31.544 } 00:23:31.544 ] 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:31.544 07:44:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 BaseBdev3 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 [ 00:23:31.544 { 00:23:31.544 "name": "BaseBdev3", 00:23:31.544 "aliases": [ 00:23:31.544 "4c4c0f47-e67c-43f1-a697-6320ac06ae43" 00:23:31.544 ], 00:23:31.544 "product_name": "Malloc disk", 00:23:31.544 "block_size": 512, 00:23:31.544 "num_blocks": 65536, 00:23:31.544 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:31.544 "assigned_rate_limits": { 00:23:31.544 "rw_ios_per_sec": 0, 00:23:31.544 "rw_mbytes_per_sec": 0, 00:23:31.544 "r_mbytes_per_sec": 0, 00:23:31.544 "w_mbytes_per_sec": 0 00:23:31.544 }, 00:23:31.544 "claimed": false, 00:23:31.544 "zoned": false, 00:23:31.544 "supported_io_types": { 00:23:31.544 "read": true, 00:23:31.544 "write": true, 00:23:31.544 "unmap": true, 00:23:31.544 "flush": true, 00:23:31.544 "reset": true, 00:23:31.544 "nvme_admin": false, 00:23:31.544 "nvme_io": false, 00:23:31.544 "nvme_io_md": false, 00:23:31.544 "write_zeroes": true, 00:23:31.544 "zcopy": true, 00:23:31.544 "get_zone_info": false, 00:23:31.544 "zone_management": false, 00:23:31.544 "zone_append": false, 00:23:31.544 "compare": false, 00:23:31.544 "compare_and_write": false, 00:23:31.544 "abort": true, 00:23:31.544 "seek_hole": false, 00:23:31.544 "seek_data": false, 00:23:31.544 "copy": true, 00:23:31.544 "nvme_iov_md": false 00:23:31.544 }, 00:23:31.544 "memory_domains": [ 00:23:31.544 { 00:23:31.544 "dma_device_id": "system", 00:23:31.544 "dma_device_type": 1 00:23:31.544 }, 00:23:31.544 { 00:23:31.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.544 "dma_device_type": 2 00:23:31.544 } 00:23:31.544 ], 00:23:31.544 "driver_specific": {} 00:23:31.544 } 00:23:31.544 ] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 BaseBdev4 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.544 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.802 [ 00:23:31.802 { 00:23:31.802 "name": "BaseBdev4", 00:23:31.802 "aliases": [ 00:23:31.802 "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62" 00:23:31.802 ], 00:23:31.802 "product_name": "Malloc disk", 00:23:31.802 "block_size": 512, 00:23:31.802 "num_blocks": 65536, 00:23:31.802 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:31.802 "assigned_rate_limits": { 00:23:31.802 "rw_ios_per_sec": 0, 00:23:31.802 "rw_mbytes_per_sec": 0, 00:23:31.802 "r_mbytes_per_sec": 0, 00:23:31.802 "w_mbytes_per_sec": 0 00:23:31.802 }, 00:23:31.802 "claimed": false, 00:23:31.802 "zoned": false, 00:23:31.802 "supported_io_types": { 00:23:31.802 "read": true, 00:23:31.802 "write": true, 00:23:31.802 "unmap": true, 00:23:31.802 "flush": true, 00:23:31.802 "reset": true, 00:23:31.802 "nvme_admin": false, 00:23:31.802 "nvme_io": false, 00:23:31.802 "nvme_io_md": false, 00:23:31.802 "write_zeroes": true, 00:23:31.802 "zcopy": true, 00:23:31.802 "get_zone_info": false, 00:23:31.802 "zone_management": false, 00:23:31.802 "zone_append": false, 00:23:31.802 "compare": false, 00:23:31.802 "compare_and_write": false, 00:23:31.802 "abort": true, 00:23:31.802 "seek_hole": false, 00:23:31.802 "seek_data": false, 00:23:31.802 "copy": true, 00:23:31.803 "nvme_iov_md": false 00:23:31.803 }, 00:23:31.803 "memory_domains": [ 00:23:31.803 { 00:23:31.803 "dma_device_id": "system", 00:23:31.803 "dma_device_type": 1 00:23:31.803 }, 00:23:31.803 { 00:23:31.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.803 "dma_device_type": 2 00:23:31.803 } 00:23:31.803 ], 00:23:31.803 "driver_specific": {} 00:23:31.803 } 00:23:31.803 ] 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 
00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.803 [2024-10-07 07:44:31.124977] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:31.803 [2024-10-07 07:44:31.125174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:31.803 [2024-10-07 07:44:31.125214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:31.803 [2024-10-07 07:44:31.127576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:31.803 [2024-10-07 07:44:31.127791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.803 "name": "Existed_Raid", 00:23:31.803 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:31.803 "strip_size_kb": 0, 00:23:31.803 "state": "configuring", 00:23:31.803 "raid_level": "raid1", 00:23:31.803 "superblock": true, 00:23:31.803 "num_base_bdevs": 4, 00:23:31.803 "num_base_bdevs_discovered": 3, 00:23:31.803 "num_base_bdevs_operational": 4, 00:23:31.803 "base_bdevs_list": [ 00:23:31.803 { 00:23:31.803 "name": "BaseBdev1", 00:23:31.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.803 "is_configured": false, 00:23:31.803 "data_offset": 0, 00:23:31.803 "data_size": 0 00:23:31.803 }, 00:23:31.803 { 00:23:31.803 "name": "BaseBdev2", 00:23:31.803 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 
00:23:31.803 "is_configured": true, 00:23:31.803 "data_offset": 2048, 00:23:31.803 "data_size": 63488 00:23:31.803 }, 00:23:31.803 { 00:23:31.803 "name": "BaseBdev3", 00:23:31.803 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:31.803 "is_configured": true, 00:23:31.803 "data_offset": 2048, 00:23:31.803 "data_size": 63488 00:23:31.803 }, 00:23:31.803 { 00:23:31.803 "name": "BaseBdev4", 00:23:31.803 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:31.803 "is_configured": true, 00:23:31.803 "data_offset": 2048, 00:23:31.803 "data_size": 63488 00:23:31.803 } 00:23:31.803 ] 00:23:31.803 }' 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.803 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.064 [2024-10-07 07:44:31.525046] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.064 "name": "Existed_Raid", 00:23:32.064 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:32.064 "strip_size_kb": 0, 00:23:32.064 "state": "configuring", 00:23:32.064 "raid_level": "raid1", 00:23:32.064 "superblock": true, 00:23:32.064 "num_base_bdevs": 4, 00:23:32.064 "num_base_bdevs_discovered": 2, 00:23:32.064 "num_base_bdevs_operational": 4, 00:23:32.064 "base_bdevs_list": [ 00:23:32.064 { 00:23:32.064 "name": "BaseBdev1", 00:23:32.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.064 "is_configured": false, 00:23:32.064 "data_offset": 0, 00:23:32.064 "data_size": 0 00:23:32.064 }, 00:23:32.064 { 00:23:32.064 "name": null, 00:23:32.064 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:32.064 
"is_configured": false, 00:23:32.064 "data_offset": 0, 00:23:32.064 "data_size": 63488 00:23:32.064 }, 00:23:32.064 { 00:23:32.064 "name": "BaseBdev3", 00:23:32.064 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:32.064 "is_configured": true, 00:23:32.064 "data_offset": 2048, 00:23:32.064 "data_size": 63488 00:23:32.064 }, 00:23:32.064 { 00:23:32.064 "name": "BaseBdev4", 00:23:32.064 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:32.064 "is_configured": true, 00:23:32.064 "data_offset": 2048, 00:23:32.064 "data_size": 63488 00:23:32.064 } 00:23:32.064 ] 00:23:32.064 }' 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.064 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.631 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.631 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:32.631 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.631 07:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:32.631 07:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.631 [2024-10-07 07:44:32.061541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:32.631 BaseBdev1 
00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.631 [ 00:23:32.631 { 00:23:32.631 "name": "BaseBdev1", 00:23:32.631 "aliases": [ 00:23:32.631 "3b1b3f46-9982-4798-8125-d9a17db8395f" 00:23:32.631 ], 00:23:32.631 "product_name": "Malloc disk", 00:23:32.631 "block_size": 512, 00:23:32.631 "num_blocks": 65536, 00:23:32.631 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:32.631 "assigned_rate_limits": { 00:23:32.631 
"rw_ios_per_sec": 0, 00:23:32.631 "rw_mbytes_per_sec": 0, 00:23:32.631 "r_mbytes_per_sec": 0, 00:23:32.631 "w_mbytes_per_sec": 0 00:23:32.631 }, 00:23:32.631 "claimed": true, 00:23:32.631 "claim_type": "exclusive_write", 00:23:32.631 "zoned": false, 00:23:32.631 "supported_io_types": { 00:23:32.631 "read": true, 00:23:32.631 "write": true, 00:23:32.631 "unmap": true, 00:23:32.631 "flush": true, 00:23:32.631 "reset": true, 00:23:32.631 "nvme_admin": false, 00:23:32.631 "nvme_io": false, 00:23:32.631 "nvme_io_md": false, 00:23:32.631 "write_zeroes": true, 00:23:32.631 "zcopy": true, 00:23:32.631 "get_zone_info": false, 00:23:32.631 "zone_management": false, 00:23:32.631 "zone_append": false, 00:23:32.631 "compare": false, 00:23:32.631 "compare_and_write": false, 00:23:32.631 "abort": true, 00:23:32.631 "seek_hole": false, 00:23:32.631 "seek_data": false, 00:23:32.631 "copy": true, 00:23:32.631 "nvme_iov_md": false 00:23:32.631 }, 00:23:32.631 "memory_domains": [ 00:23:32.631 { 00:23:32.631 "dma_device_id": "system", 00:23:32.631 "dma_device_type": 1 00:23:32.631 }, 00:23:32.631 { 00:23:32.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.631 "dma_device_type": 2 00:23:32.631 } 00:23:32.631 ], 00:23:32.631 "driver_specific": {} 00:23:32.631 } 00:23:32.631 ] 00:23:32.631 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.632 "name": "Existed_Raid", 00:23:32.632 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:32.632 "strip_size_kb": 0, 00:23:32.632 "state": "configuring", 00:23:32.632 "raid_level": "raid1", 00:23:32.632 "superblock": true, 00:23:32.632 "num_base_bdevs": 4, 00:23:32.632 "num_base_bdevs_discovered": 3, 00:23:32.632 "num_base_bdevs_operational": 4, 00:23:32.632 "base_bdevs_list": [ 00:23:32.632 { 00:23:32.632 "name": "BaseBdev1", 00:23:32.632 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:32.632 "is_configured": true, 00:23:32.632 "data_offset": 2048, 00:23:32.632 "data_size": 63488 
00:23:32.632 }, 00:23:32.632 { 00:23:32.632 "name": null, 00:23:32.632 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:32.632 "is_configured": false, 00:23:32.632 "data_offset": 0, 00:23:32.632 "data_size": 63488 00:23:32.632 }, 00:23:32.632 { 00:23:32.632 "name": "BaseBdev3", 00:23:32.632 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:32.632 "is_configured": true, 00:23:32.632 "data_offset": 2048, 00:23:32.632 "data_size": 63488 00:23:32.632 }, 00:23:32.632 { 00:23:32.632 "name": "BaseBdev4", 00:23:32.632 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:32.632 "is_configured": true, 00:23:32.632 "data_offset": 2048, 00:23:32.632 "data_size": 63488 00:23:32.632 } 00:23:32.632 ] 00:23:32.632 }' 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.632 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.199 
[2024-10-07 07:44:32.569736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:33.199 07:44:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.199 "name": "Existed_Raid", 00:23:33.199 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:33.199 "strip_size_kb": 0, 00:23:33.199 "state": "configuring", 00:23:33.199 "raid_level": "raid1", 00:23:33.199 "superblock": true, 00:23:33.199 "num_base_bdevs": 4, 00:23:33.199 "num_base_bdevs_discovered": 2, 00:23:33.199 "num_base_bdevs_operational": 4, 00:23:33.199 "base_bdevs_list": [ 00:23:33.199 { 00:23:33.199 "name": "BaseBdev1", 00:23:33.199 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:33.199 "is_configured": true, 00:23:33.199 "data_offset": 2048, 00:23:33.199 "data_size": 63488 00:23:33.199 }, 00:23:33.199 { 00:23:33.199 "name": null, 00:23:33.199 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:33.199 "is_configured": false, 00:23:33.199 "data_offset": 0, 00:23:33.199 "data_size": 63488 00:23:33.199 }, 00:23:33.199 { 00:23:33.199 "name": null, 00:23:33.199 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:33.199 "is_configured": false, 00:23:33.199 "data_offset": 0, 00:23:33.199 "data_size": 63488 00:23:33.199 }, 00:23:33.199 { 00:23:33.199 "name": "BaseBdev4", 00:23:33.199 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:33.199 "is_configured": true, 00:23:33.199 "data_offset": 2048, 00:23:33.199 "data_size": 63488 00:23:33.199 } 00:23:33.199 ] 00:23:33.199 }' 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.199 07:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 07:44:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 [2024-10-07 07:44:33.085935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.769 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:33.770 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.770 "name": "Existed_Raid", 00:23:33.770 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:33.770 "strip_size_kb": 0, 00:23:33.770 "state": "configuring", 00:23:33.770 "raid_level": "raid1", 00:23:33.770 "superblock": true, 00:23:33.770 "num_base_bdevs": 4, 00:23:33.770 "num_base_bdevs_discovered": 3, 00:23:33.770 "num_base_bdevs_operational": 4, 00:23:33.770 "base_bdevs_list": [ 00:23:33.770 { 00:23:33.770 "name": "BaseBdev1", 00:23:33.770 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:33.770 "is_configured": true, 00:23:33.770 "data_offset": 2048, 00:23:33.770 "data_size": 63488 00:23:33.770 }, 00:23:33.770 { 00:23:33.770 "name": null, 00:23:33.770 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:33.770 "is_configured": false, 00:23:33.770 "data_offset": 0, 00:23:33.770 "data_size": 63488 00:23:33.770 }, 00:23:33.770 { 00:23:33.770 "name": "BaseBdev3", 00:23:33.770 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:33.770 "is_configured": true, 00:23:33.770 "data_offset": 2048, 00:23:33.770 "data_size": 63488 00:23:33.770 }, 00:23:33.770 { 00:23:33.770 "name": "BaseBdev4", 00:23:33.770 "uuid": 
"ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:33.770 "is_configured": true, 00:23:33.770 "data_offset": 2048, 00:23:33.770 "data_size": 63488 00:23:33.770 } 00:23:33.770 ] 00:23:33.770 }' 00:23:33.770 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.770 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:34.029 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.029 [2024-10-07 07:44:33.582033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.289 "name": "Existed_Raid", 00:23:34.289 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:34.289 "strip_size_kb": 0, 00:23:34.289 "state": "configuring", 00:23:34.289 "raid_level": "raid1", 00:23:34.289 "superblock": true, 00:23:34.289 "num_base_bdevs": 4, 00:23:34.289 "num_base_bdevs_discovered": 2, 00:23:34.289 "num_base_bdevs_operational": 4, 00:23:34.289 "base_bdevs_list": [ 00:23:34.289 { 00:23:34.289 "name": null, 00:23:34.289 
"uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:34.289 "is_configured": false, 00:23:34.289 "data_offset": 0, 00:23:34.289 "data_size": 63488 00:23:34.289 }, 00:23:34.289 { 00:23:34.289 "name": null, 00:23:34.289 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:34.289 "is_configured": false, 00:23:34.289 "data_offset": 0, 00:23:34.289 "data_size": 63488 00:23:34.289 }, 00:23:34.289 { 00:23:34.289 "name": "BaseBdev3", 00:23:34.289 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:34.289 "is_configured": true, 00:23:34.289 "data_offset": 2048, 00:23:34.289 "data_size": 63488 00:23:34.289 }, 00:23:34.289 { 00:23:34.289 "name": "BaseBdev4", 00:23:34.289 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:34.289 "is_configured": true, 00:23:34.289 "data_offset": 2048, 00:23:34.289 "data_size": 63488 00:23:34.289 } 00:23:34.289 ] 00:23:34.289 }' 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.289 07:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.858 [2024-10-07 07:44:34.147882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.858 "name": "Existed_Raid", 00:23:34.858 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:34.858 "strip_size_kb": 0, 00:23:34.858 "state": "configuring", 00:23:34.858 "raid_level": "raid1", 00:23:34.858 "superblock": true, 00:23:34.858 "num_base_bdevs": 4, 00:23:34.858 "num_base_bdevs_discovered": 3, 00:23:34.858 "num_base_bdevs_operational": 4, 00:23:34.858 "base_bdevs_list": [ 00:23:34.858 { 00:23:34.858 "name": null, 00:23:34.858 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:34.858 "is_configured": false, 00:23:34.858 "data_offset": 0, 00:23:34.858 "data_size": 63488 00:23:34.858 }, 00:23:34.858 { 00:23:34.858 "name": "BaseBdev2", 00:23:34.858 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:34.858 "is_configured": true, 00:23:34.858 "data_offset": 2048, 00:23:34.858 "data_size": 63488 00:23:34.858 }, 00:23:34.858 { 00:23:34.858 "name": "BaseBdev3", 00:23:34.858 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:34.858 "is_configured": true, 00:23:34.858 "data_offset": 2048, 00:23:34.858 "data_size": 63488 00:23:34.858 }, 00:23:34.858 { 00:23:34.858 "name": "BaseBdev4", 00:23:34.858 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:34.858 "is_configured": true, 00:23:34.858 "data_offset": 2048, 00:23:34.858 "data_size": 63488 00:23:34.858 } 00:23:34.858 ] 00:23:34.858 }' 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.858 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.118 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.377 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b1b3f46-9982-4798-8125-d9a17db8395f 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.378 [2024-10-07 07:44:34.751647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:35.378 NewBaseBdev 00:23:35.378 [2024-10-07 07:44:34.752175] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:35.378 [2024-10-07 07:44:34.752208] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:35.378 [2024-10-07 07:44:34.752503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:23:35.378 [2024-10-07 07:44:34.752681] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:35.378 [2024-10-07 07:44:34.752692] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:35.378 [2024-10-07 07:44:34.752860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.378 [ 00:23:35.378 { 00:23:35.378 "name": "NewBaseBdev", 00:23:35.378 "aliases": [ 00:23:35.378 "3b1b3f46-9982-4798-8125-d9a17db8395f" 00:23:35.378 ], 00:23:35.378 "product_name": "Malloc disk", 00:23:35.378 "block_size": 512, 00:23:35.378 "num_blocks": 65536, 00:23:35.378 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:35.378 "assigned_rate_limits": { 00:23:35.378 "rw_ios_per_sec": 0, 00:23:35.378 "rw_mbytes_per_sec": 0, 00:23:35.378 "r_mbytes_per_sec": 0, 00:23:35.378 "w_mbytes_per_sec": 0 00:23:35.378 }, 00:23:35.378 "claimed": true, 00:23:35.378 "claim_type": "exclusive_write", 00:23:35.378 "zoned": false, 00:23:35.378 "supported_io_types": { 00:23:35.378 "read": true, 00:23:35.378 "write": true, 00:23:35.378 "unmap": true, 00:23:35.378 "flush": true, 00:23:35.378 "reset": true, 00:23:35.378 "nvme_admin": false, 00:23:35.378 "nvme_io": false, 00:23:35.378 "nvme_io_md": false, 00:23:35.378 "write_zeroes": true, 00:23:35.378 "zcopy": true, 00:23:35.378 "get_zone_info": false, 00:23:35.378 "zone_management": false, 00:23:35.378 "zone_append": false, 00:23:35.378 "compare": false, 00:23:35.378 "compare_and_write": false, 00:23:35.378 "abort": true, 00:23:35.378 "seek_hole": false, 00:23:35.378 "seek_data": false, 00:23:35.378 "copy": true, 00:23:35.378 "nvme_iov_md": false 00:23:35.378 }, 00:23:35.378 "memory_domains": [ 00:23:35.378 { 00:23:35.378 "dma_device_id": "system", 00:23:35.378 "dma_device_type": 1 00:23:35.378 }, 00:23:35.378 { 00:23:35.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.378 "dma_device_type": 2 00:23:35.378 } 00:23:35.378 ], 00:23:35.378 "driver_specific": {} 00:23:35.378 } 00:23:35.378 ] 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.378 "name": "Existed_Raid", 00:23:35.378 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:35.378 "strip_size_kb": 0, 00:23:35.378 "state": "online", 00:23:35.378 "raid_level": 
"raid1", 00:23:35.378 "superblock": true, 00:23:35.378 "num_base_bdevs": 4, 00:23:35.378 "num_base_bdevs_discovered": 4, 00:23:35.378 "num_base_bdevs_operational": 4, 00:23:35.378 "base_bdevs_list": [ 00:23:35.378 { 00:23:35.378 "name": "NewBaseBdev", 00:23:35.378 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:35.378 "is_configured": true, 00:23:35.378 "data_offset": 2048, 00:23:35.378 "data_size": 63488 00:23:35.378 }, 00:23:35.378 { 00:23:35.378 "name": "BaseBdev2", 00:23:35.378 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:35.378 "is_configured": true, 00:23:35.378 "data_offset": 2048, 00:23:35.378 "data_size": 63488 00:23:35.378 }, 00:23:35.378 { 00:23:35.378 "name": "BaseBdev3", 00:23:35.378 "uuid": "4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:35.378 "is_configured": true, 00:23:35.378 "data_offset": 2048, 00:23:35.378 "data_size": 63488 00:23:35.378 }, 00:23:35.378 { 00:23:35.378 "name": "BaseBdev4", 00:23:35.378 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:35.378 "is_configured": true, 00:23:35.378 "data_offset": 2048, 00:23:35.378 "data_size": 63488 00:23:35.378 } 00:23:35.378 ] 00:23:35.378 }' 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.378 07:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:35.947 [2024-10-07 07:44:35.244241] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.947 "name": "Existed_Raid", 00:23:35.947 "aliases": [ 00:23:35.947 "1a823f29-a097-4720-86bb-2385e7d75464" 00:23:35.947 ], 00:23:35.947 "product_name": "Raid Volume", 00:23:35.947 "block_size": 512, 00:23:35.947 "num_blocks": 63488, 00:23:35.947 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:35.947 "assigned_rate_limits": { 00:23:35.947 "rw_ios_per_sec": 0, 00:23:35.947 "rw_mbytes_per_sec": 0, 00:23:35.947 "r_mbytes_per_sec": 0, 00:23:35.947 "w_mbytes_per_sec": 0 00:23:35.947 }, 00:23:35.947 "claimed": false, 00:23:35.947 "zoned": false, 00:23:35.947 "supported_io_types": { 00:23:35.947 "read": true, 00:23:35.947 "write": true, 00:23:35.947 "unmap": false, 00:23:35.947 "flush": false, 00:23:35.947 "reset": true, 00:23:35.947 "nvme_admin": false, 00:23:35.947 "nvme_io": false, 00:23:35.947 "nvme_io_md": false, 00:23:35.947 "write_zeroes": true, 00:23:35.947 "zcopy": false, 00:23:35.947 "get_zone_info": false, 00:23:35.947 "zone_management": false, 00:23:35.947 "zone_append": false, 00:23:35.947 "compare": false, 00:23:35.947 "compare_and_write": false, 00:23:35.947 "abort": false, 00:23:35.947 "seek_hole": false, 
00:23:35.947 "seek_data": false, 00:23:35.947 "copy": false, 00:23:35.947 "nvme_iov_md": false 00:23:35.947 }, 00:23:35.947 "memory_domains": [ 00:23:35.947 { 00:23:35.947 "dma_device_id": "system", 00:23:35.947 "dma_device_type": 1 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.947 "dma_device_type": 2 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "dma_device_id": "system", 00:23:35.947 "dma_device_type": 1 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.947 "dma_device_type": 2 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "dma_device_id": "system", 00:23:35.947 "dma_device_type": 1 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.947 "dma_device_type": 2 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "dma_device_id": "system", 00:23:35.947 "dma_device_type": 1 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.947 "dma_device_type": 2 00:23:35.947 } 00:23:35.947 ], 00:23:35.947 "driver_specific": { 00:23:35.947 "raid": { 00:23:35.947 "uuid": "1a823f29-a097-4720-86bb-2385e7d75464", 00:23:35.947 "strip_size_kb": 0, 00:23:35.947 "state": "online", 00:23:35.947 "raid_level": "raid1", 00:23:35.947 "superblock": true, 00:23:35.947 "num_base_bdevs": 4, 00:23:35.947 "num_base_bdevs_discovered": 4, 00:23:35.947 "num_base_bdevs_operational": 4, 00:23:35.947 "base_bdevs_list": [ 00:23:35.947 { 00:23:35.947 "name": "NewBaseBdev", 00:23:35.947 "uuid": "3b1b3f46-9982-4798-8125-d9a17db8395f", 00:23:35.947 "is_configured": true, 00:23:35.947 "data_offset": 2048, 00:23:35.947 "data_size": 63488 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "name": "BaseBdev2", 00:23:35.947 "uuid": "f9a627ce-e775-4255-906c-1cca2e0f41b6", 00:23:35.947 "is_configured": true, 00:23:35.947 "data_offset": 2048, 00:23:35.947 "data_size": 63488 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "name": "BaseBdev3", 00:23:35.947 "uuid": 
"4c4c0f47-e67c-43f1-a697-6320ac06ae43", 00:23:35.947 "is_configured": true, 00:23:35.947 "data_offset": 2048, 00:23:35.947 "data_size": 63488 00:23:35.947 }, 00:23:35.947 { 00:23:35.947 "name": "BaseBdev4", 00:23:35.947 "uuid": "ef5b23b2-66ca-4bd7-8d77-cda3e0bc3d62", 00:23:35.947 "is_configured": true, 00:23:35.947 "data_offset": 2048, 00:23:35.947 "data_size": 63488 00:23:35.947 } 00:23:35.947 ] 00:23:35.947 } 00:23:35.947 } 00:23:35.947 }' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:35.947 BaseBdev2 00:23:35.947 BaseBdev3 00:23:35.947 BaseBdev4' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:35.947 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:36.207 
07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.207 [2024-10-07 07:44:35.567949] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:36.207 [2024-10-07 07:44:35.568102] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.207 [2024-10-07 07:44:35.568295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.207 [2024-10-07 07:44:35.568764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.207 [2024-10-07 07:44:35.568927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:36.207 07:44:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74027 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 74027 ']' 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 74027 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 74027 00:23:36.207 killing process with pid 74027 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 74027' 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 74027 00:23:36.207 [2024-10-07 07:44:35.611202] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:36.207 07:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 74027 00:23:36.777 [2024-10-07 07:44:36.047535] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:38.155 07:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:38.155 00:23:38.155 real 0m12.186s 00:23:38.155 user 0m19.323s 00:23:38.155 sys 0m2.165s 00:23:38.155 07:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:38.155 ************************************ 00:23:38.155 
END TEST raid_state_function_test_sb 00:23:38.155 ************************************ 00:23:38.155 07:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.155 07:44:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:23:38.155 07:44:37 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:23:38.155 07:44:37 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:38.155 07:44:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:38.155 ************************************ 00:23:38.155 START TEST raid_superblock_test 00:23:38.155 ************************************ 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid1 4 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:38.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74705 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74705 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 74705 ']' 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:38.155 07:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.155 [2024-10-07 07:44:37.620316] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:23:38.156 [2024-10-07 07:44:37.620499] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74705 ] 00:23:38.414 [2024-10-07 07:44:37.811109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.672 [2024-10-07 07:44:38.097230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.930 [2024-10-07 07:44:38.327281] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.930 [2024-10-07 07:44:38.327323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:23:39.188 
07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.188 malloc1 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.188 [2024-10-07 07:44:38.553978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:39.188 [2024-10-07 07:44:38.554185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.188 [2024-10-07 07:44:38.554257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:39.188 [2024-10-07 07:44:38.554418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.188 [2024-10-07 07:44:38.557162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.188 [2024-10-07 07:44:38.557328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:39.188 pt1 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.188 malloc2 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.188 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.188 [2024-10-07 07:44:38.624152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:39.189 [2024-10-07 07:44:38.624343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.189 [2024-10-07 07:44:38.624408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:39.189 [2024-10-07 07:44:38.624485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.189 [2024-10-07 07:44:38.627168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.189 pt2 00:23:39.189 [2024-10-07 07:44:38.627310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.189 malloc3 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.189 [2024-10-07 07:44:38.673009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:39.189 [2024-10-07 07:44:38.673182] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.189 [2024-10-07 07:44:38.673247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:39.189 [2024-10-07 07:44:38.673330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.189 [2024-10-07 07:44:38.675882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.189 [2024-10-07 07:44:38.676019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:39.189 pt3 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.189 malloc4 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.189 [2024-10-07 07:44:38.732436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:39.189 [2024-10-07 07:44:38.732618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.189 [2024-10-07 07:44:38.732699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:39.189 [2024-10-07 07:44:38.732893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.189 [2024-10-07 07:44:38.735391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.189 [2024-10-07 07:44:38.735526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:39.189 pt4 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.189 [2024-10-07 07:44:38.740495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:39.189 [2024-10-07 07:44:38.742858] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:39.189 [2024-10-07 07:44:38.743043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:39.189 [2024-10-07 07:44:38.743103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:39.189 [2024-10-07 07:44:38.743385] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:39.189 [2024-10-07 07:44:38.743404] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:39.189 [2024-10-07 07:44:38.743985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:39.189 [2024-10-07 07:44:38.744353] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:39.189 [2024-10-07 07:44:38.744516] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:39.189 [2024-10-07 07:44:38.744917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.189 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.447 
07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.447 "name": "raid_bdev1", 00:23:39.447 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:39.447 "strip_size_kb": 0, 00:23:39.447 "state": "online", 00:23:39.447 "raid_level": "raid1", 00:23:39.447 "superblock": true, 00:23:39.447 "num_base_bdevs": 4, 00:23:39.447 "num_base_bdevs_discovered": 4, 00:23:39.447 "num_base_bdevs_operational": 4, 00:23:39.447 "base_bdevs_list": [ 00:23:39.447 { 00:23:39.447 "name": "pt1", 00:23:39.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:39.447 "is_configured": true, 00:23:39.447 "data_offset": 2048, 00:23:39.447 "data_size": 63488 00:23:39.447 }, 00:23:39.447 { 00:23:39.447 "name": "pt2", 00:23:39.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:39.447 "is_configured": true, 00:23:39.447 "data_offset": 2048, 00:23:39.447 "data_size": 63488 00:23:39.447 }, 00:23:39.447 { 00:23:39.447 "name": "pt3", 00:23:39.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:39.447 "is_configured": true, 00:23:39.447 "data_offset": 2048, 00:23:39.447 "data_size": 63488 
00:23:39.447 }, 00:23:39.447 { 00:23:39.447 "name": "pt4", 00:23:39.447 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:39.447 "is_configured": true, 00:23:39.447 "data_offset": 2048, 00:23:39.447 "data_size": 63488 00:23:39.447 } 00:23:39.447 ] 00:23:39.447 }' 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.447 07:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.706 [2024-10-07 07:44:39.173360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.706 "name": "raid_bdev1", 00:23:39.706 "aliases": [ 00:23:39.706 "dc240277-22a5-42a9-8db5-6b899a993c71" 00:23:39.706 ], 
00:23:39.706 "product_name": "Raid Volume", 00:23:39.706 "block_size": 512, 00:23:39.706 "num_blocks": 63488, 00:23:39.706 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:39.706 "assigned_rate_limits": { 00:23:39.706 "rw_ios_per_sec": 0, 00:23:39.706 "rw_mbytes_per_sec": 0, 00:23:39.706 "r_mbytes_per_sec": 0, 00:23:39.706 "w_mbytes_per_sec": 0 00:23:39.706 }, 00:23:39.706 "claimed": false, 00:23:39.706 "zoned": false, 00:23:39.706 "supported_io_types": { 00:23:39.706 "read": true, 00:23:39.706 "write": true, 00:23:39.706 "unmap": false, 00:23:39.706 "flush": false, 00:23:39.706 "reset": true, 00:23:39.706 "nvme_admin": false, 00:23:39.706 "nvme_io": false, 00:23:39.706 "nvme_io_md": false, 00:23:39.706 "write_zeroes": true, 00:23:39.706 "zcopy": false, 00:23:39.706 "get_zone_info": false, 00:23:39.706 "zone_management": false, 00:23:39.706 "zone_append": false, 00:23:39.706 "compare": false, 00:23:39.706 "compare_and_write": false, 00:23:39.706 "abort": false, 00:23:39.706 "seek_hole": false, 00:23:39.706 "seek_data": false, 00:23:39.706 "copy": false, 00:23:39.706 "nvme_iov_md": false 00:23:39.706 }, 00:23:39.706 "memory_domains": [ 00:23:39.706 { 00:23:39.706 "dma_device_id": "system", 00:23:39.706 "dma_device_type": 1 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.706 "dma_device_type": 2 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "dma_device_id": "system", 00:23:39.706 "dma_device_type": 1 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.706 "dma_device_type": 2 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "dma_device_id": "system", 00:23:39.706 "dma_device_type": 1 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.706 "dma_device_type": 2 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "dma_device_id": "system", 00:23:39.706 "dma_device_type": 1 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:39.706 "dma_device_type": 2 00:23:39.706 } 00:23:39.706 ], 00:23:39.706 "driver_specific": { 00:23:39.706 "raid": { 00:23:39.706 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:39.706 "strip_size_kb": 0, 00:23:39.706 "state": "online", 00:23:39.706 "raid_level": "raid1", 00:23:39.706 "superblock": true, 00:23:39.706 "num_base_bdevs": 4, 00:23:39.706 "num_base_bdevs_discovered": 4, 00:23:39.706 "num_base_bdevs_operational": 4, 00:23:39.706 "base_bdevs_list": [ 00:23:39.706 { 00:23:39.706 "name": "pt1", 00:23:39.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:39.706 "is_configured": true, 00:23:39.706 "data_offset": 2048, 00:23:39.706 "data_size": 63488 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "name": "pt2", 00:23:39.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:39.706 "is_configured": true, 00:23:39.706 "data_offset": 2048, 00:23:39.706 "data_size": 63488 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "name": "pt3", 00:23:39.706 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:39.706 "is_configured": true, 00:23:39.706 "data_offset": 2048, 00:23:39.706 "data_size": 63488 00:23:39.706 }, 00:23:39.706 { 00:23:39.706 "name": "pt4", 00:23:39.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:39.706 "is_configured": true, 00:23:39.706 "data_offset": 2048, 00:23:39.706 "data_size": 63488 00:23:39.706 } 00:23:39.706 ] 00:23:39.706 } 00:23:39.706 } 00:23:39.706 }' 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:39.706 pt2 00:23:39.706 pt3 00:23:39.706 pt4' 00:23:39.706 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.965 07:44:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.965 [2024-10-07 07:44:39.473401] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dc240277-22a5-42a9-8db5-6b899a993c71 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dc240277-22a5-42a9-8db5-6b899a993c71 ']' 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:39.965 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.966 [2024-10-07 07:44:39.521067] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.966 [2024-10-07 07:44:39.521210] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.966 [2024-10-07 07:44:39.521387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.966 [2024-10-07 07:44:39.521568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.966 [2024-10-07 07:44:39.521729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 
-- # xtrace_disable 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:23:40.225 07:44:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.225 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.225 [2024-10-07 07:44:39.681130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:40.225 [2024-10-07 07:44:39.683661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:40.225 [2024-10-07 07:44:39.683844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:40.225 [2024-10-07 07:44:39.683895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:40.225 [2024-10-07 07:44:39.683951] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:40.225 [2024-10-07 07:44:39.684010] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:40.225 [2024-10-07 07:44:39.684035] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:40.225 [2024-10-07 07:44:39.684059] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:40.226 [2024-10-07 07:44:39.684077] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:40.226 [2024-10-07 07:44:39.684091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:23:40.226 request: 00:23:40.226 { 00:23:40.226 "name": "raid_bdev1", 00:23:40.226 "raid_level": "raid1", 00:23:40.226 "base_bdevs": [ 00:23:40.226 "malloc1", 00:23:40.226 "malloc2", 00:23:40.226 "malloc3", 00:23:40.226 "malloc4" 00:23:40.226 ], 00:23:40.226 "superblock": false, 00:23:40.226 "method": "bdev_raid_create", 00:23:40.226 "req_id": 1 00:23:40.226 } 00:23:40.226 Got JSON-RPC error response 00:23:40.226 response: 00:23:40.226 { 00:23:40.226 "code": -17, 00:23:40.226 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:40.226 } 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:40.226 
07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.226 [2024-10-07 07:44:39.737109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:40.226 [2024-10-07 07:44:39.737180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.226 [2024-10-07 07:44:39.737203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:40.226 [2024-10-07 07:44:39.737219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.226 [2024-10-07 07:44:39.740000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.226 [2024-10-07 07:44:39.740052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:40.226 [2024-10-07 07:44:39.740143] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:40.226 [2024-10-07 07:44:39.740213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:40.226 pt1 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:40.226 07:44:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.226 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.486 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.486 "name": "raid_bdev1", 00:23:40.486 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:40.486 "strip_size_kb": 0, 00:23:40.486 "state": "configuring", 00:23:40.486 "raid_level": "raid1", 00:23:40.486 "superblock": true, 00:23:40.486 "num_base_bdevs": 4, 00:23:40.486 "num_base_bdevs_discovered": 1, 00:23:40.486 "num_base_bdevs_operational": 4, 00:23:40.486 "base_bdevs_list": [ 00:23:40.486 { 00:23:40.486 "name": "pt1", 00:23:40.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:40.486 "is_configured": true, 00:23:40.486 "data_offset": 2048, 00:23:40.486 "data_size": 63488 00:23:40.486 }, 00:23:40.486 { 00:23:40.486 "name": null, 00:23:40.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:40.486 "is_configured": false, 00:23:40.486 "data_offset": 2048, 00:23:40.486 "data_size": 63488 00:23:40.486 }, 00:23:40.486 { 00:23:40.486 "name": null, 00:23:40.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:40.486 
"is_configured": false, 00:23:40.486 "data_offset": 2048, 00:23:40.486 "data_size": 63488 00:23:40.486 }, 00:23:40.486 { 00:23:40.486 "name": null, 00:23:40.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:40.486 "is_configured": false, 00:23:40.486 "data_offset": 2048, 00:23:40.486 "data_size": 63488 00:23:40.486 } 00:23:40.486 ] 00:23:40.486 }' 00:23:40.486 07:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.486 07:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.747 [2024-10-07 07:44:40.229242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:40.747 [2024-10-07 07:44:40.229455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.747 [2024-10-07 07:44:40.229523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:40.747 [2024-10-07 07:44:40.229628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.747 [2024-10-07 07:44:40.230186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.747 [2024-10-07 07:44:40.230221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:40.747 [2024-10-07 07:44:40.230317] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:40.747 [2024-10-07 07:44:40.230351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:23:40.747 pt2 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.747 [2024-10-07 07:44:40.237228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:40.747 07:44:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.747 "name": "raid_bdev1", 00:23:40.747 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:40.747 "strip_size_kb": 0, 00:23:40.747 "state": "configuring", 00:23:40.747 "raid_level": "raid1", 00:23:40.747 "superblock": true, 00:23:40.747 "num_base_bdevs": 4, 00:23:40.747 "num_base_bdevs_discovered": 1, 00:23:40.747 "num_base_bdevs_operational": 4, 00:23:40.747 "base_bdevs_list": [ 00:23:40.747 { 00:23:40.747 "name": "pt1", 00:23:40.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:40.747 "is_configured": true, 00:23:40.747 "data_offset": 2048, 00:23:40.747 "data_size": 63488 00:23:40.747 }, 00:23:40.747 { 00:23:40.747 "name": null, 00:23:40.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:40.747 "is_configured": false, 00:23:40.747 "data_offset": 0, 00:23:40.747 "data_size": 63488 00:23:40.747 }, 00:23:40.747 { 00:23:40.747 "name": null, 00:23:40.747 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:40.747 "is_configured": false, 00:23:40.747 "data_offset": 2048, 00:23:40.747 "data_size": 63488 00:23:40.747 }, 00:23:40.747 { 00:23:40.747 "name": null, 00:23:40.747 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:40.747 "is_configured": false, 00:23:40.747 "data_offset": 2048, 00:23:40.747 "data_size": 63488 00:23:40.747 } 00:23:40.747 ] 00:23:40.747 }' 00:23:40.747 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.748 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.317 [2024-10-07 07:44:40.689403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:41.317 [2024-10-07 07:44:40.689684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.317 [2024-10-07 07:44:40.689943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:41.317 [2024-10-07 07:44:40.689975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.317 [2024-10-07 07:44:40.690646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.317 [2024-10-07 07:44:40.690685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:41.317 [2024-10-07 07:44:40.690835] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:41.317 [2024-10-07 07:44:40.690878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:41.317 pt2 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:41.317 07:44:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.317 [2024-10-07 07:44:40.697390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:41.317 [2024-10-07 07:44:40.697630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.317 [2024-10-07 07:44:40.697807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:41.317 [2024-10-07 07:44:40.697962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.317 [2024-10-07 07:44:40.698591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.317 [2024-10-07 07:44:40.698807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:41.317 [2024-10-07 07:44:40.699042] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:41.317 [2024-10-07 07:44:40.699199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:41.317 pt3 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.317 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.317 [2024-10-07 07:44:40.705351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:41.317 [2024-10-07 
07:44:40.705557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.317 [2024-10-07 07:44:40.705736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:41.317 [2024-10-07 07:44:40.705869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.317 [2024-10-07 07:44:40.706424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.317 [2024-10-07 07:44:40.706467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:41.317 [2024-10-07 07:44:40.706558] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:41.318 [2024-10-07 07:44:40.706607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:41.318 [2024-10-07 07:44:40.706874] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:41.318 [2024-10-07 07:44:40.706891] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:41.318 [2024-10-07 07:44:40.707272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:41.318 [2024-10-07 07:44:40.707532] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:41.318 [2024-10-07 07:44:40.707553] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:41.318 [2024-10-07 07:44:40.707792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.318 pt4 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.318 "name": "raid_bdev1", 00:23:41.318 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:41.318 "strip_size_kb": 0, 00:23:41.318 "state": "online", 00:23:41.318 "raid_level": "raid1", 00:23:41.318 "superblock": true, 00:23:41.318 "num_base_bdevs": 4, 00:23:41.318 
"num_base_bdevs_discovered": 4, 00:23:41.318 "num_base_bdevs_operational": 4, 00:23:41.318 "base_bdevs_list": [ 00:23:41.318 { 00:23:41.318 "name": "pt1", 00:23:41.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:41.318 "is_configured": true, 00:23:41.318 "data_offset": 2048, 00:23:41.318 "data_size": 63488 00:23:41.318 }, 00:23:41.318 { 00:23:41.318 "name": "pt2", 00:23:41.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:41.318 "is_configured": true, 00:23:41.318 "data_offset": 2048, 00:23:41.318 "data_size": 63488 00:23:41.318 }, 00:23:41.318 { 00:23:41.318 "name": "pt3", 00:23:41.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:41.318 "is_configured": true, 00:23:41.318 "data_offset": 2048, 00:23:41.318 "data_size": 63488 00:23:41.318 }, 00:23:41.318 { 00:23:41.318 "name": "pt4", 00:23:41.318 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:41.318 "is_configured": true, 00:23:41.318 "data_offset": 2048, 00:23:41.318 "data_size": 63488 00:23:41.318 } 00:23:41.318 ] 00:23:41.318 }' 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.318 07:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.907 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:41.907 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:41.908 07:44:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 [2024-10-07 07:44:41.209885] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:41.908 "name": "raid_bdev1", 00:23:41.908 "aliases": [ 00:23:41.908 "dc240277-22a5-42a9-8db5-6b899a993c71" 00:23:41.908 ], 00:23:41.908 "product_name": "Raid Volume", 00:23:41.908 "block_size": 512, 00:23:41.908 "num_blocks": 63488, 00:23:41.908 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:41.908 "assigned_rate_limits": { 00:23:41.908 "rw_ios_per_sec": 0, 00:23:41.908 "rw_mbytes_per_sec": 0, 00:23:41.908 "r_mbytes_per_sec": 0, 00:23:41.908 "w_mbytes_per_sec": 0 00:23:41.908 }, 00:23:41.908 "claimed": false, 00:23:41.908 "zoned": false, 00:23:41.908 "supported_io_types": { 00:23:41.908 "read": true, 00:23:41.908 "write": true, 00:23:41.908 "unmap": false, 00:23:41.908 "flush": false, 00:23:41.908 "reset": true, 00:23:41.908 "nvme_admin": false, 00:23:41.908 "nvme_io": false, 00:23:41.908 "nvme_io_md": false, 00:23:41.908 "write_zeroes": true, 00:23:41.908 "zcopy": false, 00:23:41.908 "get_zone_info": false, 00:23:41.908 "zone_management": false, 00:23:41.908 "zone_append": false, 00:23:41.908 "compare": false, 00:23:41.908 "compare_and_write": false, 00:23:41.908 "abort": false, 00:23:41.908 "seek_hole": false, 00:23:41.908 "seek_data": false, 00:23:41.908 "copy": false, 00:23:41.908 "nvme_iov_md": false 00:23:41.908 }, 00:23:41.908 "memory_domains": [ 00:23:41.908 { 00:23:41.908 "dma_device_id": "system", 00:23:41.908 
"dma_device_type": 1 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.908 "dma_device_type": 2 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "dma_device_id": "system", 00:23:41.908 "dma_device_type": 1 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.908 "dma_device_type": 2 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "dma_device_id": "system", 00:23:41.908 "dma_device_type": 1 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.908 "dma_device_type": 2 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "dma_device_id": "system", 00:23:41.908 "dma_device_type": 1 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.908 "dma_device_type": 2 00:23:41.908 } 00:23:41.908 ], 00:23:41.908 "driver_specific": { 00:23:41.908 "raid": { 00:23:41.908 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:41.908 "strip_size_kb": 0, 00:23:41.908 "state": "online", 00:23:41.908 "raid_level": "raid1", 00:23:41.908 "superblock": true, 00:23:41.908 "num_base_bdevs": 4, 00:23:41.908 "num_base_bdevs_discovered": 4, 00:23:41.908 "num_base_bdevs_operational": 4, 00:23:41.908 "base_bdevs_list": [ 00:23:41.908 { 00:23:41.908 "name": "pt1", 00:23:41.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:41.908 "is_configured": true, 00:23:41.908 "data_offset": 2048, 00:23:41.908 "data_size": 63488 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "name": "pt2", 00:23:41.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:41.908 "is_configured": true, 00:23:41.908 "data_offset": 2048, 00:23:41.908 "data_size": 63488 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "name": "pt3", 00:23:41.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:41.908 "is_configured": true, 00:23:41.908 "data_offset": 2048, 00:23:41.908 "data_size": 63488 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "name": "pt4", 00:23:41.908 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:23:41.908 "is_configured": true, 00:23:41.908 "data_offset": 2048, 00:23:41.908 "data_size": 63488 00:23:41.908 } 00:23:41.908 ] 00:23:41.908 } 00:23:41.908 } 00:23:41.908 }' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:41.908 pt2 00:23:41.908 pt3 00:23:41.908 pt4' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 
-- # xtrace_disable 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:42.167 [2024-10-07 07:44:41.521934] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dc240277-22a5-42a9-8db5-6b899a993c71 '!=' dc240277-22a5-42a9-8db5-6b899a993c71 ']' 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.167 [2024-10-07 07:44:41.565633] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:42.167 07:44:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.167 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.167 "name": "raid_bdev1", 00:23:42.167 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:42.167 "strip_size_kb": 0, 00:23:42.167 "state": "online", 
00:23:42.167 "raid_level": "raid1", 00:23:42.167 "superblock": true, 00:23:42.167 "num_base_bdevs": 4, 00:23:42.167 "num_base_bdevs_discovered": 3, 00:23:42.167 "num_base_bdevs_operational": 3, 00:23:42.167 "base_bdevs_list": [ 00:23:42.167 { 00:23:42.168 "name": null, 00:23:42.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.168 "is_configured": false, 00:23:42.168 "data_offset": 0, 00:23:42.168 "data_size": 63488 00:23:42.168 }, 00:23:42.168 { 00:23:42.168 "name": "pt2", 00:23:42.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:42.168 "is_configured": true, 00:23:42.168 "data_offset": 2048, 00:23:42.168 "data_size": 63488 00:23:42.168 }, 00:23:42.168 { 00:23:42.168 "name": "pt3", 00:23:42.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:42.168 "is_configured": true, 00:23:42.168 "data_offset": 2048, 00:23:42.168 "data_size": 63488 00:23:42.168 }, 00:23:42.168 { 00:23:42.168 "name": "pt4", 00:23:42.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:42.168 "is_configured": true, 00:23:42.168 "data_offset": 2048, 00:23:42.168 "data_size": 63488 00:23:42.168 } 00:23:42.168 ] 00:23:42.168 }' 00:23:42.168 07:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.168 07:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.735 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:42.735 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 [2024-10-07 07:44:42.005688] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:42.736 [2024-10-07 07:44:42.005740] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:42.736 [2024-10-07 07:44:42.005834] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:23:42.736 [2024-10-07 07:44:42.005925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.736 [2024-10-07 07:44:42.005938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:42.736 
07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 [2024-10-07 07:44:42.089698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:42.736 [2024-10-07 07:44:42.089883] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.736 [2024-10-07 07:44:42.089947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:42.736 [2024-10-07 07:44:42.090030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.736 [2024-10-07 07:44:42.092866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.736 [2024-10-07 07:44:42.093024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:42.736 [2024-10-07 07:44:42.093139] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:42.736 [2024-10-07 07:44:42.093193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:42.736 pt2 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.736 "name": "raid_bdev1", 00:23:42.736 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:42.736 "strip_size_kb": 0, 00:23:42.736 "state": "configuring", 00:23:42.736 "raid_level": "raid1", 00:23:42.736 "superblock": true, 00:23:42.736 "num_base_bdevs": 4, 00:23:42.736 "num_base_bdevs_discovered": 1, 00:23:42.736 "num_base_bdevs_operational": 3, 00:23:42.736 "base_bdevs_list": [ 00:23:42.736 { 00:23:42.736 "name": null, 00:23:42.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.736 "is_configured": false, 00:23:42.736 "data_offset": 2048, 00:23:42.736 "data_size": 63488 00:23:42.736 }, 00:23:42.736 { 00:23:42.736 "name": "pt2", 00:23:42.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:42.736 "is_configured": true, 00:23:42.736 "data_offset": 2048, 00:23:42.736 "data_size": 63488 00:23:42.736 }, 00:23:42.736 { 00:23:42.736 "name": null, 00:23:42.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:42.736 "is_configured": false, 00:23:42.736 "data_offset": 2048, 00:23:42.736 "data_size": 63488 00:23:42.736 }, 00:23:42.736 { 00:23:42.736 "name": null, 00:23:42.736 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:42.736 "is_configured": false, 00:23:42.736 "data_offset": 2048, 00:23:42.736 "data_size": 63488 00:23:42.736 } 00:23:42.736 ] 00:23:42.736 }' 
00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.736 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.019 [2024-10-07 07:44:42.521874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:43.019 [2024-10-07 07:44:42.522984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.019 [2024-10-07 07:44:42.523031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:43.019 [2024-10-07 07:44:42.523045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.019 [2024-10-07 07:44:42.523577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.019 [2024-10-07 07:44:42.523607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:43.019 [2024-10-07 07:44:42.523724] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:43.019 [2024-10-07 07:44:42.523759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:43.019 pt3 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:43.019 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.020 "name": "raid_bdev1", 00:23:43.020 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:43.020 "strip_size_kb": 0, 00:23:43.020 "state": "configuring", 00:23:43.020 "raid_level": "raid1", 00:23:43.020 "superblock": true, 00:23:43.020 "num_base_bdevs": 4, 00:23:43.020 "num_base_bdevs_discovered": 2, 00:23:43.020 "num_base_bdevs_operational": 3, 00:23:43.020 
"base_bdevs_list": [ 00:23:43.020 { 00:23:43.020 "name": null, 00:23:43.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.020 "is_configured": false, 00:23:43.020 "data_offset": 2048, 00:23:43.020 "data_size": 63488 00:23:43.020 }, 00:23:43.020 { 00:23:43.020 "name": "pt2", 00:23:43.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:43.020 "is_configured": true, 00:23:43.020 "data_offset": 2048, 00:23:43.020 "data_size": 63488 00:23:43.020 }, 00:23:43.020 { 00:23:43.020 "name": "pt3", 00:23:43.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:43.020 "is_configured": true, 00:23:43.020 "data_offset": 2048, 00:23:43.020 "data_size": 63488 00:23:43.020 }, 00:23:43.020 { 00:23:43.020 "name": null, 00:23:43.020 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:43.020 "is_configured": false, 00:23:43.020 "data_offset": 2048, 00:23:43.020 "data_size": 63488 00:23:43.020 } 00:23:43.020 ] 00:23:43.020 }' 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.020 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.585 [2024-10-07 07:44:42.977992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:43.585 [2024-10-07 07:44:42.978231] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.585 [2024-10-07 07:44:42.978306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:43.585 [2024-10-07 07:44:42.978511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.585 [2024-10-07 07:44:42.979123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.585 [2024-10-07 07:44:42.979162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:43.585 [2024-10-07 07:44:42.979261] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:43.585 [2024-10-07 07:44:42.979295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:43.585 [2024-10-07 07:44:42.979440] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:43.585 [2024-10-07 07:44:42.979451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:43.585 [2024-10-07 07:44:42.979769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:43.585 [2024-10-07 07:44:42.979940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:43.585 [2024-10-07 07:44:42.979957] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:43.585 [2024-10-07 07:44:42.980133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.585 pt4 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:43.585 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.586 07:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.586 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:43.586 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.586 "name": "raid_bdev1", 00:23:43.586 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:43.586 "strip_size_kb": 0, 00:23:43.586 "state": "online", 00:23:43.586 "raid_level": "raid1", 00:23:43.586 "superblock": true, 00:23:43.586 "num_base_bdevs": 4, 00:23:43.586 "num_base_bdevs_discovered": 3, 00:23:43.586 "num_base_bdevs_operational": 3, 00:23:43.586 "base_bdevs_list": [ 00:23:43.586 { 00:23:43.586 "name": null, 00:23:43.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.586 "is_configured": false, 00:23:43.586 
"data_offset": 2048, 00:23:43.586 "data_size": 63488 00:23:43.586 }, 00:23:43.586 { 00:23:43.586 "name": "pt2", 00:23:43.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:43.586 "is_configured": true, 00:23:43.586 "data_offset": 2048, 00:23:43.586 "data_size": 63488 00:23:43.586 }, 00:23:43.586 { 00:23:43.586 "name": "pt3", 00:23:43.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:43.586 "is_configured": true, 00:23:43.586 "data_offset": 2048, 00:23:43.586 "data_size": 63488 00:23:43.586 }, 00:23:43.586 { 00:23:43.586 "name": "pt4", 00:23:43.586 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:43.586 "is_configured": true, 00:23:43.586 "data_offset": 2048, 00:23:43.586 "data_size": 63488 00:23:43.586 } 00:23:43.586 ] 00:23:43.586 }' 00:23:43.586 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.586 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.153 [2024-10-07 07:44:43.430102] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.153 [2024-10-07 07:44:43.430321] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:44.153 [2024-10-07 07:44:43.430566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.153 [2024-10-07 07:44:43.430837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.153 [2024-10-07 07:44:43.431003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:44.153 07:44:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.153 [2024-10-07 07:44:43.494143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:44.153 [2024-10-07 07:44:43.494247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:23:44.153 [2024-10-07 07:44:43.494287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:44.153 [2024-10-07 07:44:43.494311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.153 [2024-10-07 07:44:43.497682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.153 [2024-10-07 07:44:43.497764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:44.153 [2024-10-07 07:44:43.497889] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:44.153 [2024-10-07 07:44:43.497966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:44.153 pt1 00:23:44.153 [2024-10-07 07:44:43.498145] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:44.153 [2024-10-07 07:44:43.498184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.153 [2024-10-07 07:44:43.498215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:44.153 [2024-10-07 07:44:43.498311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:44.153 [2024-10-07 07:44:43.498533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.153 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.154 "name": "raid_bdev1", 00:23:44.154 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:44.154 "strip_size_kb": 0, 00:23:44.154 "state": "configuring", 00:23:44.154 "raid_level": "raid1", 00:23:44.154 "superblock": true, 00:23:44.154 "num_base_bdevs": 4, 00:23:44.154 "num_base_bdevs_discovered": 2, 00:23:44.154 "num_base_bdevs_operational": 3, 00:23:44.154 "base_bdevs_list": [ 00:23:44.154 { 00:23:44.154 "name": null, 00:23:44.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.154 "is_configured": false, 00:23:44.154 "data_offset": 2048, 00:23:44.154 
"data_size": 63488 00:23:44.154 }, 00:23:44.154 { 00:23:44.154 "name": "pt2", 00:23:44.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:44.154 "is_configured": true, 00:23:44.154 "data_offset": 2048, 00:23:44.154 "data_size": 63488 00:23:44.154 }, 00:23:44.154 { 00:23:44.154 "name": "pt3", 00:23:44.154 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:44.154 "is_configured": true, 00:23:44.154 "data_offset": 2048, 00:23:44.154 "data_size": 63488 00:23:44.154 }, 00:23:44.154 { 00:23:44.154 "name": null, 00:23:44.154 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:44.154 "is_configured": false, 00:23:44.154 "data_offset": 2048, 00:23:44.154 "data_size": 63488 00:23:44.154 } 00:23:44.154 ] 00:23:44.154 }' 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.154 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.413 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:44.413 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.413 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.413 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:44.671 07:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.671 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:44.671 07:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.671 [2024-10-07 
07:44:44.006416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:44.671 [2024-10-07 07:44:44.006625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.671 [2024-10-07 07:44:44.006669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:44.671 [2024-10-07 07:44:44.006683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.671 [2024-10-07 07:44:44.007239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.671 [2024-10-07 07:44:44.007272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:44.671 [2024-10-07 07:44:44.007373] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:44.671 [2024-10-07 07:44:44.007400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:44.671 [2024-10-07 07:44:44.007564] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:44.671 [2024-10-07 07:44:44.007575] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:44.671 [2024-10-07 07:44:44.007885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:44.671 [2024-10-07 07:44:44.008050] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:44.671 [2024-10-07 07:44:44.008066] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:44.671 [2024-10-07 07:44:44.008219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.671 pt4 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:44.671 07:44:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.671 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.671 "name": "raid_bdev1", 00:23:44.671 "uuid": "dc240277-22a5-42a9-8db5-6b899a993c71", 00:23:44.671 "strip_size_kb": 0, 00:23:44.671 "state": "online", 00:23:44.671 "raid_level": "raid1", 00:23:44.671 "superblock": true, 00:23:44.671 "num_base_bdevs": 4, 00:23:44.671 "num_base_bdevs_discovered": 3, 00:23:44.671 "num_base_bdevs_operational": 3, 00:23:44.671 "base_bdevs_list": [ 00:23:44.671 { 
00:23:44.671 "name": null, 00:23:44.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.671 "is_configured": false, 00:23:44.671 "data_offset": 2048, 00:23:44.671 "data_size": 63488 00:23:44.671 }, 00:23:44.671 { 00:23:44.671 "name": "pt2", 00:23:44.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:44.671 "is_configured": true, 00:23:44.671 "data_offset": 2048, 00:23:44.671 "data_size": 63488 00:23:44.671 }, 00:23:44.671 { 00:23:44.671 "name": "pt3", 00:23:44.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:44.671 "is_configured": true, 00:23:44.671 "data_offset": 2048, 00:23:44.671 "data_size": 63488 00:23:44.671 }, 00:23:44.671 { 00:23:44.671 "name": "pt4", 00:23:44.671 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:44.672 "is_configured": true, 00:23:44.672 "data_offset": 2048, 00:23:44.672 "data_size": 63488 00:23:44.672 } 00:23:44.672 ] 00:23:44.672 }' 00:23:44.672 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.672 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.929 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:44.929 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.929 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.929 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:44.929 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:44.929 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:44.929 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:44.930 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:44.930 
07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:44.930 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.930 [2024-10-07 07:44:44.478827] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' dc240277-22a5-42a9-8db5-6b899a993c71 '!=' dc240277-22a5-42a9-8db5-6b899a993c71 ']' 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74705 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 74705 ']' 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # kill -0 74705 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # uname 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 74705 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:45.188 killing process with pid 74705 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 74705' 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # kill 74705 00:23:45.188 07:44:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@977 -- # wait 74705 00:23:45.188 [2024-10-07 07:44:44.554363] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:45.188 [2024-10-07 07:44:44.554482] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:45.188 [2024-10-07 07:44:44.554566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:45.188 [2024-10-07 07:44:44.554582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:45.446 [2024-10-07 07:44:44.990519] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:46.829 07:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:46.829 00:23:46.829 real 0m8.865s 00:23:46.829 user 0m13.830s 00:23:46.829 sys 0m1.658s 00:23:46.830 07:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:46.830 ************************************ 00:23:46.830 END TEST raid_superblock_test 00:23:46.830 07:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.830 ************************************ 00:23:47.089 07:44:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:23:47.089 07:44:46 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:23:47.089 07:44:46 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:47.089 07:44:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:47.089 ************************************ 00:23:47.089 START TEST raid_read_error_test 00:23:47.089 ************************************ 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid1 4 read 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:23:47.089 07:44:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.n2UYeAtSJ3 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75199 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75199 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # '[' -z 75199 ']' 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:47.089 07:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.089 [2024-10-07 07:44:46.550888] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:23:47.089 [2024-10-07 07:44:46.551035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75199 ] 00:23:47.349 [2024-10-07 07:44:46.720824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.608 [2024-10-07 07:44:46.992613] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.866 [2024-10-07 07:44:47.227013] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.866 [2024-10-07 07:44:47.227053] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:48.125 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:48.125 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@867 -- # return 0 00:23:48.125 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:48.125 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:48.125 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.125 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.125 BaseBdev1_malloc 00:23:48.125 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 true 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 [2024-10-07 07:44:47.525075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:48.126 [2024-10-07 07:44:47.525151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.126 [2024-10-07 07:44:47.525177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:48.126 [2024-10-07 07:44:47.525194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.126 [2024-10-07 07:44:47.527963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.126 [2024-10-07 07:44:47.528012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:48.126 BaseBdev1 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 BaseBdev2_malloc 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 true 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 [2024-10-07 07:44:47.599938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:48.126 [2024-10-07 07:44:47.600004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.126 [2024-10-07 07:44:47.600027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:48.126 [2024-10-07 07:44:47.600043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.126 [2024-10-07 07:44:47.602741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.126 [2024-10-07 07:44:47.602787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:48.126 BaseBdev2 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 BaseBdev3_malloc 00:23:48.126 07:44:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 true 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.126 [2024-10-07 07:44:47.663391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:48.126 [2024-10-07 07:44:47.663455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.126 [2024-10-07 07:44:47.663480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:48.126 [2024-10-07 07:44:47.663496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.126 [2024-10-07 07:44:47.666291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.126 [2024-10-07 07:44:47.666340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:48.126 BaseBdev3 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.126 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.385 BaseBdev4_malloc 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.385 true 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.385 [2024-10-07 07:44:47.728740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:48.385 [2024-10-07 07:44:47.728810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.385 [2024-10-07 07:44:47.728838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:48.385 [2024-10-07 07:44:47.728857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.385 [2024-10-07 07:44:47.731656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.385 [2024-10-07 07:44:47.731721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:48.385 BaseBdev4 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.385 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.385 [2024-10-07 07:44:47.736843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:48.385 [2024-10-07 07:44:47.739303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:48.385 [2024-10-07 07:44:47.739396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:48.385 [2024-10-07 07:44:47.739467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:48.385 [2024-10-07 07:44:47.739743] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:48.385 [2024-10-07 07:44:47.739768] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:48.385 [2024-10-07 07:44:47.740087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:48.385 [2024-10-07 07:44:47.740289] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:48.386 [2024-10-07 07:44:47.740309] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:48.386 [2024-10-07 07:44:47.740499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:48.386 07:44:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.386 "name": "raid_bdev1", 00:23:48.386 "uuid": "c82d22b5-bec7-4f9d-bcd4-efa0104822c8", 00:23:48.386 "strip_size_kb": 0, 00:23:48.386 "state": "online", 00:23:48.386 "raid_level": "raid1", 00:23:48.386 "superblock": true, 00:23:48.386 "num_base_bdevs": 4, 00:23:48.386 "num_base_bdevs_discovered": 4, 00:23:48.386 "num_base_bdevs_operational": 4, 00:23:48.386 "base_bdevs_list": [ 00:23:48.386 { 
00:23:48.386 "name": "BaseBdev1", 00:23:48.386 "uuid": "06a9bc7a-e7a1-55b2-9860-d4c7abb95f2b", 00:23:48.386 "is_configured": true, 00:23:48.386 "data_offset": 2048, 00:23:48.386 "data_size": 63488 00:23:48.386 }, 00:23:48.386 { 00:23:48.386 "name": "BaseBdev2", 00:23:48.386 "uuid": "0b9880cb-4e79-5350-a091-1dcc089c7590", 00:23:48.386 "is_configured": true, 00:23:48.386 "data_offset": 2048, 00:23:48.386 "data_size": 63488 00:23:48.386 }, 00:23:48.386 { 00:23:48.386 "name": "BaseBdev3", 00:23:48.386 "uuid": "f11c2eaf-b6ae-597b-a61d-b1ca5c6311f0", 00:23:48.386 "is_configured": true, 00:23:48.386 "data_offset": 2048, 00:23:48.386 "data_size": 63488 00:23:48.386 }, 00:23:48.386 { 00:23:48.386 "name": "BaseBdev4", 00:23:48.386 "uuid": "3e871d41-0ea9-56cf-bdeb-b8558883487e", 00:23:48.386 "is_configured": true, 00:23:48.386 "data_offset": 2048, 00:23:48.386 "data_size": 63488 00:23:48.386 } 00:23:48.386 ] 00:23:48.386 }' 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.386 07:44:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.644 07:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:48.644 07:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:48.903 [2024-10-07 07:44:48.282583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:49.837 07:44:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:49.837 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.838 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.838 07:44:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:49.838 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.838 "name": "raid_bdev1", 00:23:49.838 "uuid": "c82d22b5-bec7-4f9d-bcd4-efa0104822c8", 00:23:49.838 "strip_size_kb": 0, 00:23:49.838 "state": "online", 00:23:49.838 "raid_level": "raid1", 00:23:49.838 "superblock": true, 00:23:49.838 "num_base_bdevs": 4, 00:23:49.838 "num_base_bdevs_discovered": 4, 00:23:49.838 "num_base_bdevs_operational": 4, 00:23:49.838 "base_bdevs_list": [ 00:23:49.838 { 00:23:49.838 "name": "BaseBdev1", 00:23:49.838 "uuid": "06a9bc7a-e7a1-55b2-9860-d4c7abb95f2b", 00:23:49.838 "is_configured": true, 00:23:49.838 "data_offset": 2048, 00:23:49.838 "data_size": 63488 00:23:49.838 }, 00:23:49.838 { 00:23:49.838 "name": "BaseBdev2", 00:23:49.838 "uuid": "0b9880cb-4e79-5350-a091-1dcc089c7590", 00:23:49.838 "is_configured": true, 00:23:49.838 "data_offset": 2048, 00:23:49.838 "data_size": 63488 00:23:49.838 }, 00:23:49.838 { 00:23:49.838 "name": "BaseBdev3", 00:23:49.838 "uuid": "f11c2eaf-b6ae-597b-a61d-b1ca5c6311f0", 00:23:49.838 "is_configured": true, 00:23:49.838 "data_offset": 2048, 00:23:49.838 "data_size": 63488 00:23:49.838 }, 00:23:49.838 { 00:23:49.838 "name": "BaseBdev4", 00:23:49.838 "uuid": "3e871d41-0ea9-56cf-bdeb-b8558883487e", 00:23:49.838 "is_configured": true, 00:23:49.838 "data_offset": 2048, 00:23:49.838 "data_size": 63488 00:23:49.838 } 00:23:49.838 ] 00:23:49.838 }' 00:23:49.838 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.838 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.096 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:50.096 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:50.096 07:44:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:50.096 [2024-10-07 07:44:49.630623] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:50.096 [2024-10-07 07:44:49.630664] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:50.096 [2024-10-07 07:44:49.633735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:50.096 [2024-10-07 07:44:49.633798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.096 [2024-10-07 07:44:49.633943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:50.096 [2024-10-07 07:44:49.633966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:50.096 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:50.096 { 00:23:50.096 "results": [ 00:23:50.096 { 00:23:50.096 "job": "raid_bdev1", 00:23:50.096 "core_mask": "0x1", 00:23:50.096 "workload": "randrw", 00:23:50.096 "percentage": 50, 00:23:50.096 "status": "finished", 00:23:50.096 "queue_depth": 1, 00:23:50.096 "io_size": 131072, 00:23:50.096 "runtime": 1.345697, 00:23:50.097 "iops": 9898.959424001094, 00:23:50.097 "mibps": 1237.3699280001367, 00:23:50.097 "io_failed": 0, 00:23:50.097 "io_timeout": 0, 00:23:50.097 "avg_latency_us": 97.92810578356409, 00:23:50.097 "min_latency_us": 24.624761904761904, 00:23:50.097 "max_latency_us": 1778.8342857142857 00:23:50.097 } 00:23:50.097 ], 00:23:50.097 "core_count": 1 00:23:50.097 } 00:23:50.097 07:44:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75199 00:23:50.097 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' -z 75199 ']' 00:23:50.097 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # kill -0 75199 00:23:50.097 07:44:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@958 -- # uname 00:23:50.097 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:50.097 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 75199 00:23:50.355 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:50.355 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:50.355 killing process with pid 75199 00:23:50.355 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 75199' 00:23:50.355 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # kill 75199 00:23:50.355 [2024-10-07 07:44:49.676434] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:50.355 07:44:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@977 -- # wait 75199 00:23:50.613 [2024-10-07 07:44:50.048453] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.n2UYeAtSJ3 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:51.993 00:23:51.993 real 0m5.107s 00:23:51.993 user 0m5.959s 00:23:51.993 sys 0m0.660s 
00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:51.993 07:44:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.993 ************************************ 00:23:51.993 END TEST raid_read_error_test 00:23:51.993 ************************************ 00:23:52.252 07:44:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:23:52.252 07:44:51 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:23:52.252 07:44:51 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:52.252 07:44:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:52.252 ************************************ 00:23:52.252 START TEST raid_write_error_test 00:23:52.252 ************************************ 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # raid_io_error_test raid1 4 write 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eQV71lmoVj 00:23:52.252 07:44:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75353 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75353 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # '[' -z 75353 ']' 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.252 07:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:52.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.253 07:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.253 07:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:52.253 07:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.253 07:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:52.253 [2024-10-07 07:44:51.743238] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:23:52.253 [2024-10-07 07:44:51.743457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75353 ] 00:23:52.512 [2024-10-07 07:44:51.953070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.771 [2024-10-07 07:44:52.188555] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.030 [2024-10-07 07:44:52.417629] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:53.030 [2024-10-07 07:44:52.417673] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@867 -- # return 0 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.290 BaseBdev1_malloc 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.290 true 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.290 [2024-10-07 07:44:52.762632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:53.290 [2024-10-07 07:44:52.762693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.290 [2024-10-07 07:44:52.762729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:53.290 [2024-10-07 07:44:52.762746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.290 [2024-10-07 07:44:52.765356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.290 [2024-10-07 07:44:52.765401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:53.290 BaseBdev1 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.290 BaseBdev2_malloc 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:53.290 07:44:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.290 true 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.290 [2024-10-07 07:44:52.830059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:53.290 [2024-10-07 07:44:52.830122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.290 [2024-10-07 07:44:52.830145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:53.290 [2024-10-07 07:44:52.830160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.290 [2024-10-07 07:44:52.832835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.290 [2024-10-07 07:44:52.832883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:53.290 BaseBdev2 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.290 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:23:53.550 BaseBdev3_malloc 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.550 true 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.550 [2024-10-07 07:44:52.890473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:53.550 [2024-10-07 07:44:52.890534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.550 [2024-10-07 07:44:52.890556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:53.550 [2024-10-07 07:44:52.890571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.550 [2024-10-07 07:44:52.893195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.550 [2024-10-07 07:44:52.893242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:53.550 BaseBdev3 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.550 BaseBdev4_malloc 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.550 true 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.550 [2024-10-07 07:44:52.950322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:53.550 [2024-10-07 07:44:52.950382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.550 [2024-10-07 07:44:52.950404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:53.550 [2024-10-07 07:44:52.950420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.550 [2024-10-07 07:44:52.952966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.550 [2024-10-07 07:44:52.953014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:53.550 BaseBdev4 
00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.550 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.550 [2024-10-07 07:44:52.958412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:53.550 [2024-10-07 07:44:52.960658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:53.550 [2024-10-07 07:44:52.960755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:53.550 [2024-10-07 07:44:52.960822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:53.550 [2024-10-07 07:44:52.961051] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:53.550 [2024-10-07 07:44:52.961075] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:53.550 [2024-10-07 07:44:52.961371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:53.551 [2024-10-07 07:44:52.961568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:53.551 [2024-10-07 07:44:52.961588] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:53.551 [2024-10-07 07:44:52.961778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.551 "name": "raid_bdev1", 00:23:53.551 "uuid": "c998843e-79da-4422-a041-03282c0ab78e", 00:23:53.551 "strip_size_kb": 0, 00:23:53.551 "state": "online", 00:23:53.551 "raid_level": "raid1", 00:23:53.551 "superblock": true, 00:23:53.551 "num_base_bdevs": 4, 00:23:53.551 "num_base_bdevs_discovered": 4, 00:23:53.551 
"num_base_bdevs_operational": 4, 00:23:53.551 "base_bdevs_list": [ 00:23:53.551 { 00:23:53.551 "name": "BaseBdev1", 00:23:53.551 "uuid": "dd9427e7-fd01-5343-b879-b8816728eaa3", 00:23:53.551 "is_configured": true, 00:23:53.551 "data_offset": 2048, 00:23:53.551 "data_size": 63488 00:23:53.551 }, 00:23:53.551 { 00:23:53.551 "name": "BaseBdev2", 00:23:53.551 "uuid": "e4b7ec23-ff14-5e9f-ab25-f621b2ff3fb6", 00:23:53.551 "is_configured": true, 00:23:53.551 "data_offset": 2048, 00:23:53.551 "data_size": 63488 00:23:53.551 }, 00:23:53.551 { 00:23:53.551 "name": "BaseBdev3", 00:23:53.551 "uuid": "ac399900-c567-5977-ab71-847f1eeb1ab0", 00:23:53.551 "is_configured": true, 00:23:53.551 "data_offset": 2048, 00:23:53.551 "data_size": 63488 00:23:53.551 }, 00:23:53.551 { 00:23:53.551 "name": "BaseBdev4", 00:23:53.551 "uuid": "3ec84215-5b92-5ef5-8264-c397329a0513", 00:23:53.551 "is_configured": true, 00:23:53.551 "data_offset": 2048, 00:23:53.551 "data_size": 63488 00:23:53.551 } 00:23:53.551 ] 00:23:53.551 }' 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.551 07:44:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.120 07:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:54.120 07:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:54.120 [2024-10-07 07:44:53.504087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.058 [2024-10-07 07:44:54.388286] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:55.058 [2024-10-07 07:44:54.388349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:55.058 [2024-10-07 07:44:54.388601] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.058 "name": "raid_bdev1", 00:23:55.058 "uuid": "c998843e-79da-4422-a041-03282c0ab78e", 00:23:55.058 "strip_size_kb": 0, 00:23:55.058 "state": "online", 00:23:55.058 "raid_level": "raid1", 00:23:55.058 "superblock": true, 00:23:55.058 "num_base_bdevs": 4, 00:23:55.058 "num_base_bdevs_discovered": 3, 00:23:55.058 "num_base_bdevs_operational": 3, 00:23:55.058 "base_bdevs_list": [ 00:23:55.058 { 00:23:55.058 "name": null, 00:23:55.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.058 "is_configured": false, 00:23:55.058 "data_offset": 0, 00:23:55.058 "data_size": 63488 00:23:55.058 }, 00:23:55.058 { 00:23:55.058 "name": "BaseBdev2", 00:23:55.058 "uuid": "e4b7ec23-ff14-5e9f-ab25-f621b2ff3fb6", 00:23:55.058 "is_configured": true, 00:23:55.058 "data_offset": 2048, 00:23:55.058 "data_size": 63488 00:23:55.058 }, 00:23:55.058 { 00:23:55.058 "name": "BaseBdev3", 00:23:55.058 "uuid": "ac399900-c567-5977-ab71-847f1eeb1ab0", 00:23:55.058 "is_configured": true, 00:23:55.058 "data_offset": 2048, 00:23:55.058 "data_size": 63488 00:23:55.058 }, 00:23:55.058 { 00:23:55.058 "name": "BaseBdev4", 00:23:55.058 "uuid": "3ec84215-5b92-5ef5-8264-c397329a0513", 00:23:55.058 "is_configured": true, 00:23:55.058 "data_offset": 2048, 00:23:55.058 "data_size": 63488 00:23:55.058 } 00:23:55.058 ] 
00:23:55.058 }' 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.058 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.318 [2024-10-07 07:44:54.817615] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.318 [2024-10-07 07:44:54.817658] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.318 [2024-10-07 07:44:54.820772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.318 [2024-10-07 07:44:54.820835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.318 [2024-10-07 07:44:54.820952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.318 [2024-10-07 07:44:54.820964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:55.318 { 00:23:55.318 "results": [ 00:23:55.318 { 00:23:55.318 "job": "raid_bdev1", 00:23:55.318 "core_mask": "0x1", 00:23:55.318 "workload": "randrw", 00:23:55.318 "percentage": 50, 00:23:55.318 "status": "finished", 00:23:55.318 "queue_depth": 1, 00:23:55.318 "io_size": 131072, 00:23:55.318 "runtime": 1.311189, 00:23:55.318 "iops": 10911.470428748258, 00:23:55.318 "mibps": 1363.9338035935323, 00:23:55.318 "io_failed": 0, 00:23:55.318 "io_timeout": 0, 00:23:55.318 "avg_latency_us": 88.72939107396645, 00:23:55.318 "min_latency_us": 24.38095238095238, 00:23:55.318 "max_latency_us": 1560.3809523809523 00:23:55.318 } 00:23:55.318 ], 00:23:55.318 "core_count": 1 
00:23:55.318 } 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75353 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' -z 75353 ']' 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # kill -0 75353 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # uname 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 75353 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:23:55.318 killing process with pid 75353 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 75353' 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # kill 75353 00:23:55.318 [2024-10-07 07:44:54.859281] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:55.318 07:44:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@977 -- # wait 75353 00:23:55.910 [2024-10-07 07:44:55.215355] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eQV71lmoVj 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:57.290 00:23:57.290 real 0m5.079s 00:23:57.290 user 0m5.978s 00:23:57.290 sys 0m0.646s 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:23:57.290 07:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 ************************************ 00:23:57.290 END TEST raid_write_error_test 00:23:57.290 ************************************ 00:23:57.290 07:44:56 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:23:57.290 07:44:56 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:23:57.290 07:44:56 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:23:57.290 07:44:56 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:23:57.290 07:44:56 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:23:57.290 07:44:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:57.290 ************************************ 00:23:57.290 START TEST raid_rebuild_test 00:23:57.290 ************************************ 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 2 false false true 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:57.290 
07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75497 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75497 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # '[' -z 75497 ']' 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:23:57.290 07:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.550 [2024-10-07 07:44:56.870961] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:23:57.550 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:57.550 Zero copy mechanism will not be used. 
00:23:57.550 [2024-10-07 07:44:56.871389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75497 ] 00:23:57.550 [2024-10-07 07:44:57.057202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.809 [2024-10-07 07:44:57.346702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.069 [2024-10-07 07:44:57.571862] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.069 [2024-10-07 07:44:57.571912] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.328 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:23:58.328 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # return 0 00:23:58.328 07:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:58.328 07:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:58.328 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.328 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 BaseBdev1_malloc 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 [2024-10-07 07:44:57.909515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:58.589 
[2024-10-07 07:44:57.909756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.589 [2024-10-07 07:44:57.909791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:58.589 [2024-10-07 07:44:57.909812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.589 [2024-10-07 07:44:57.912438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.589 [2024-10-07 07:44:57.912486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:58.589 BaseBdev1 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 BaseBdev2_malloc 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 [2024-10-07 07:44:57.979332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:58.589 [2024-10-07 07:44:57.979474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.589 [2024-10-07 07:44:57.979623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:23:58.589 [2024-10-07 07:44:57.979651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.589 [2024-10-07 07:44:57.982236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.589 BaseBdev2 00:23:58.589 [2024-10-07 07:44:57.982398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.589 07:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 spare_malloc 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 spare_delay 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 [2024-10-07 07:44:58.040504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:58.589 [2024-10-07 07:44:58.040717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:23:58.589 [2024-10-07 07:44:58.040829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:58.589 [2024-10-07 07:44:58.040855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.589 [2024-10-07 07:44:58.043598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.589 [2024-10-07 07:44:58.043645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:58.589 spare 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.589 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.589 [2024-10-07 07:44:58.048561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:58.589 [2024-10-07 07:44:58.051036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:58.589 [2024-10-07 07:44:58.051272] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:58.590 [2024-10-07 07:44:58.051385] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:58.590 [2024-10-07 07:44:58.051847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:58.590 [2024-10-07 07:44:58.052031] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:58.590 [2024-10-07 07:44:58.052044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:58.590 [2024-10-07 07:44:58.052218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.590 "name": "raid_bdev1", 00:23:58.590 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:23:58.590 "strip_size_kb": 0, 00:23:58.590 "state": "online", 00:23:58.590 
"raid_level": "raid1", 00:23:58.590 "superblock": false, 00:23:58.590 "num_base_bdevs": 2, 00:23:58.590 "num_base_bdevs_discovered": 2, 00:23:58.590 "num_base_bdevs_operational": 2, 00:23:58.590 "base_bdevs_list": [ 00:23:58.590 { 00:23:58.590 "name": "BaseBdev1", 00:23:58.590 "uuid": "1f0ff318-61e2-5c95-8a80-0b7866ed3e76", 00:23:58.590 "is_configured": true, 00:23:58.590 "data_offset": 0, 00:23:58.590 "data_size": 65536 00:23:58.590 }, 00:23:58.590 { 00:23:58.590 "name": "BaseBdev2", 00:23:58.590 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:23:58.590 "is_configured": true, 00:23:58.590 "data_offset": 0, 00:23:58.590 "data_size": 65536 00:23:58.590 } 00:23:58.590 ] 00:23:58.590 }' 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.590 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.178 [2024-10-07 07:44:58.472966] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:59.178 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:59.179 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:59.179 [2024-10-07 07:44:58.724822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:59.438 /dev/nbd0 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # break 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.438 1+0 records in 00:23:59.438 1+0 records out 00:23:59.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296472 s, 13.8 MB/s 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:59.438 07:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:04.710 65536+0 records in 00:24:04.710 65536+0 records out 00:24:04.710 33554432 bytes (34 MB, 32 MiB) copied, 5.35534 s, 6.3 MB/s 00:24:04.710 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:04.710 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:04.710 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:04.710 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:04.710 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:04.710 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:04.710 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:04.970 [2024-10-07 07:45:04.441060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.970 [2024-10-07 07:45:04.489179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.970 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:05.229 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.229 "name": "raid_bdev1", 00:24:05.229 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:05.229 "strip_size_kb": 0, 00:24:05.229 "state": "online", 00:24:05.229 "raid_level": "raid1", 00:24:05.229 "superblock": false, 00:24:05.229 "num_base_bdevs": 2, 00:24:05.229 "num_base_bdevs_discovered": 1, 00:24:05.229 "num_base_bdevs_operational": 1, 00:24:05.229 "base_bdevs_list": [ 00:24:05.229 { 00:24:05.229 "name": null, 00:24:05.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.229 "is_configured": false, 00:24:05.229 "data_offset": 0, 00:24:05.229 "data_size": 65536 00:24:05.229 }, 00:24:05.229 { 00:24:05.229 "name": "BaseBdev2", 00:24:05.229 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:05.229 "is_configured": true, 00:24:05.229 "data_offset": 0, 00:24:05.229 "data_size": 65536 00:24:05.229 } 00:24:05.229 ] 00:24:05.229 }' 00:24:05.229 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.229 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.487 07:45:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:05.487 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:05.487 07:45:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.487 [2024-10-07 07:45:04.997330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.487 [2024-10-07 07:45:05.014987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:24:05.487 07:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:05.487 07:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:05.487 [2024-10-07 07:45:05.017336] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:06.865 "name": "raid_bdev1", 00:24:06.865 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:06.865 "strip_size_kb": 0, 00:24:06.865 "state": "online", 00:24:06.865 "raid_level": "raid1", 00:24:06.865 "superblock": false, 00:24:06.865 "num_base_bdevs": 2, 00:24:06.865 "num_base_bdevs_discovered": 2, 00:24:06.865 "num_base_bdevs_operational": 2, 00:24:06.865 "process": { 00:24:06.865 "type": "rebuild", 00:24:06.865 "target": "spare", 00:24:06.865 "progress": { 00:24:06.865 
"blocks": 20480, 00:24:06.865 "percent": 31 00:24:06.865 } 00:24:06.865 }, 00:24:06.865 "base_bdevs_list": [ 00:24:06.865 { 00:24:06.865 "name": "spare", 00:24:06.865 "uuid": "f5b9d321-6aef-543a-a092-3f2daaa01a6c", 00:24:06.865 "is_configured": true, 00:24:06.865 "data_offset": 0, 00:24:06.865 "data_size": 65536 00:24:06.865 }, 00:24:06.865 { 00:24:06.865 "name": "BaseBdev2", 00:24:06.865 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:06.865 "is_configured": true, 00:24:06.865 "data_offset": 0, 00:24:06.865 "data_size": 65536 00:24:06.865 } 00:24:06.865 ] 00:24:06.865 }' 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.865 [2024-10-07 07:45:06.139092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:06.865 [2024-10-07 07:45:06.225575] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:06.865 [2024-10-07 07:45:06.225925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.865 [2024-10-07 07:45:06.225950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:06.865 [2024-10-07 07:45:06.225966] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:06.865 07:45:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:06.865 "name": "raid_bdev1", 00:24:06.865 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:06.865 "strip_size_kb": 0, 00:24:06.865 "state": "online", 00:24:06.865 "raid_level": "raid1", 00:24:06.865 
"superblock": false, 00:24:06.865 "num_base_bdevs": 2, 00:24:06.865 "num_base_bdevs_discovered": 1, 00:24:06.865 "num_base_bdevs_operational": 1, 00:24:06.865 "base_bdevs_list": [ 00:24:06.865 { 00:24:06.865 "name": null, 00:24:06.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.865 "is_configured": false, 00:24:06.865 "data_offset": 0, 00:24:06.865 "data_size": 65536 00:24:06.865 }, 00:24:06.865 { 00:24:06.865 "name": "BaseBdev2", 00:24:06.865 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:06.865 "is_configured": true, 00:24:06.865 "data_offset": 0, 00:24:06.865 "data_size": 65536 00:24:06.865 } 00:24:06.865 ] 00:24:06.865 }' 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:06.865 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.124 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:07.124 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.124 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:07.124 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:07.124 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:24:07.383 "name": "raid_bdev1", 00:24:07.383 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:07.383 "strip_size_kb": 0, 00:24:07.383 "state": "online", 00:24:07.383 "raid_level": "raid1", 00:24:07.383 "superblock": false, 00:24:07.383 "num_base_bdevs": 2, 00:24:07.383 "num_base_bdevs_discovered": 1, 00:24:07.383 "num_base_bdevs_operational": 1, 00:24:07.383 "base_bdevs_list": [ 00:24:07.383 { 00:24:07.383 "name": null, 00:24:07.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.383 "is_configured": false, 00:24:07.383 "data_offset": 0, 00:24:07.383 "data_size": 65536 00:24:07.383 }, 00:24:07.383 { 00:24:07.383 "name": "BaseBdev2", 00:24:07.383 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:07.383 "is_configured": true, 00:24:07.383 "data_offset": 0, 00:24:07.383 "data_size": 65536 00:24:07.383 } 00:24:07.383 ] 00:24:07.383 }' 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.383 [2024-10-07 07:45:06.814583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.383 [2024-10-07 07:45:06.830178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:24:07.383 07:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:07.384 
07:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:07.384 [2024-10-07 07:45:06.832517] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:08.323 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.323 "name": "raid_bdev1", 00:24:08.323 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:08.323 "strip_size_kb": 0, 00:24:08.323 "state": "online", 00:24:08.323 "raid_level": "raid1", 00:24:08.323 "superblock": false, 00:24:08.323 "num_base_bdevs": 2, 00:24:08.323 "num_base_bdevs_discovered": 2, 00:24:08.323 "num_base_bdevs_operational": 2, 00:24:08.323 "process": { 00:24:08.323 "type": "rebuild", 00:24:08.323 "target": "spare", 00:24:08.323 "progress": { 00:24:08.323 "blocks": 20480, 00:24:08.323 "percent": 31 00:24:08.323 } 00:24:08.323 }, 00:24:08.323 "base_bdevs_list": [ 
00:24:08.323 { 00:24:08.323 "name": "spare", 00:24:08.323 "uuid": "f5b9d321-6aef-543a-a092-3f2daaa01a6c", 00:24:08.323 "is_configured": true, 00:24:08.323 "data_offset": 0, 00:24:08.323 "data_size": 65536 00:24:08.323 }, 00:24:08.323 { 00:24:08.323 "name": "BaseBdev2", 00:24:08.323 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:08.323 "is_configured": true, 00:24:08.323 "data_offset": 0, 00:24:08.323 "data_size": 65536 00:24:08.323 } 00:24:08.323 ] 00:24:08.323 }' 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.584 
07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.584 07:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:08.584 07:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.584 "name": "raid_bdev1", 00:24:08.584 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:08.584 "strip_size_kb": 0, 00:24:08.584 "state": "online", 00:24:08.584 "raid_level": "raid1", 00:24:08.584 "superblock": false, 00:24:08.584 "num_base_bdevs": 2, 00:24:08.584 "num_base_bdevs_discovered": 2, 00:24:08.584 "num_base_bdevs_operational": 2, 00:24:08.584 "process": { 00:24:08.584 "type": "rebuild", 00:24:08.584 "target": "spare", 00:24:08.584 "progress": { 00:24:08.584 "blocks": 22528, 00:24:08.584 "percent": 34 00:24:08.584 } 00:24:08.584 }, 00:24:08.584 "base_bdevs_list": [ 00:24:08.584 { 00:24:08.584 "name": "spare", 00:24:08.584 "uuid": "f5b9d321-6aef-543a-a092-3f2daaa01a6c", 00:24:08.584 "is_configured": true, 00:24:08.584 "data_offset": 0, 00:24:08.584 "data_size": 65536 00:24:08.584 }, 00:24:08.584 { 00:24:08.584 "name": "BaseBdev2", 00:24:08.584 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:08.584 "is_configured": true, 00:24:08.584 "data_offset": 0, 00:24:08.584 "data_size": 65536 00:24:08.584 } 00:24:08.584 ] 00:24:08.584 }' 00:24:08.584 07:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.584 07:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:24:08.584 07:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.584 07:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.584 07:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.963 "name": "raid_bdev1", 00:24:09.963 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:09.963 "strip_size_kb": 0, 00:24:09.963 "state": "online", 00:24:09.963 "raid_level": "raid1", 00:24:09.963 "superblock": false, 00:24:09.963 "num_base_bdevs": 2, 00:24:09.963 "num_base_bdevs_discovered": 2, 00:24:09.963 "num_base_bdevs_operational": 2, 00:24:09.963 "process": { 
00:24:09.963 "type": "rebuild", 00:24:09.963 "target": "spare", 00:24:09.963 "progress": { 00:24:09.963 "blocks": 45056, 00:24:09.963 "percent": 68 00:24:09.963 } 00:24:09.963 }, 00:24:09.963 "base_bdevs_list": [ 00:24:09.963 { 00:24:09.963 "name": "spare", 00:24:09.963 "uuid": "f5b9d321-6aef-543a-a092-3f2daaa01a6c", 00:24:09.963 "is_configured": true, 00:24:09.963 "data_offset": 0, 00:24:09.963 "data_size": 65536 00:24:09.963 }, 00:24:09.963 { 00:24:09.963 "name": "BaseBdev2", 00:24:09.963 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:09.963 "is_configured": true, 00:24:09.963 "data_offset": 0, 00:24:09.963 "data_size": 65536 00:24:09.963 } 00:24:09.963 ] 00:24:09.963 }' 00:24:09.963 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.964 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.964 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.964 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.964 07:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:10.531 [2024-10-07 07:45:10.053166] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:10.531 [2024-10-07 07:45:10.053498] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:10.531 [2024-10-07 07:45:10.053576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.790 "name": "raid_bdev1", 00:24:10.790 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:10.790 "strip_size_kb": 0, 00:24:10.790 "state": "online", 00:24:10.790 "raid_level": "raid1", 00:24:10.790 "superblock": false, 00:24:10.790 "num_base_bdevs": 2, 00:24:10.790 "num_base_bdevs_discovered": 2, 00:24:10.790 "num_base_bdevs_operational": 2, 00:24:10.790 "base_bdevs_list": [ 00:24:10.790 { 00:24:10.790 "name": "spare", 00:24:10.790 "uuid": "f5b9d321-6aef-543a-a092-3f2daaa01a6c", 00:24:10.790 "is_configured": true, 00:24:10.790 "data_offset": 0, 00:24:10.790 "data_size": 65536 00:24:10.790 }, 00:24:10.790 { 00:24:10.790 "name": "BaseBdev2", 00:24:10.790 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:10.790 "is_configured": true, 00:24:10.790 "data_offset": 0, 00:24:10.790 "data_size": 65536 00:24:10.790 } 00:24:10.790 ] 00:24:10.790 }' 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.790 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:10.790 07:45:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.049 "name": "raid_bdev1", 00:24:11.049 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:11.049 "strip_size_kb": 0, 00:24:11.049 "state": "online", 00:24:11.049 "raid_level": "raid1", 00:24:11.049 "superblock": false, 00:24:11.049 "num_base_bdevs": 2, 00:24:11.049 "num_base_bdevs_discovered": 2, 00:24:11.049 "num_base_bdevs_operational": 2, 00:24:11.049 "base_bdevs_list": [ 00:24:11.049 { 00:24:11.049 "name": "spare", 00:24:11.049 "uuid": "f5b9d321-6aef-543a-a092-3f2daaa01a6c", 00:24:11.049 "is_configured": true, 
00:24:11.049 "data_offset": 0, 00:24:11.049 "data_size": 65536 00:24:11.049 }, 00:24:11.049 { 00:24:11.049 "name": "BaseBdev2", 00:24:11.049 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:11.049 "is_configured": true, 00:24:11.049 "data_offset": 0, 00:24:11.049 "data_size": 65536 00:24:11.049 } 00:24:11.049 ] 00:24:11.049 }' 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:11.049 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.050 07:45:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:11.050 "name": "raid_bdev1", 00:24:11.050 "uuid": "011df302-b1bd-4c12-9daf-434add3542ad", 00:24:11.050 "strip_size_kb": 0, 00:24:11.050 "state": "online", 00:24:11.050 "raid_level": "raid1", 00:24:11.050 "superblock": false, 00:24:11.050 "num_base_bdevs": 2, 00:24:11.050 "num_base_bdevs_discovered": 2, 00:24:11.050 "num_base_bdevs_operational": 2, 00:24:11.050 "base_bdevs_list": [ 00:24:11.050 { 00:24:11.050 "name": "spare", 00:24:11.050 "uuid": "f5b9d321-6aef-543a-a092-3f2daaa01a6c", 00:24:11.050 "is_configured": true, 00:24:11.050 "data_offset": 0, 00:24:11.050 "data_size": 65536 00:24:11.050 }, 00:24:11.050 { 00:24:11.050 "name": "BaseBdev2", 00:24:11.050 "uuid": "cc583bea-3241-5126-99a1-133260b35700", 00:24:11.050 "is_configured": true, 00:24:11.050 "data_offset": 0, 00:24:11.050 "data_size": 65536 00:24:11.050 } 00:24:11.050 ] 00:24:11.050 }' 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:11.050 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.619 [2024-10-07 07:45:10.923661] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:11.619 [2024-10-07 
07:45:10.923834] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:11.619 [2024-10-07 07:45:10.924031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.619 [2024-10-07 07:45:10.924215] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:11.619 [2024-10-07 07:45:10.924340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:11.619 07:45:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:11.878 /dev/nbd0 00:24:11.878 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:11.878 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:11.878 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:24:11.878 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:24:11.878 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # break 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:11.879 1+0 records in 00:24:11.879 1+0 records out 00:24:11.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290521 s, 14.1 MB/s 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:11.879 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:12.138 /dev/nbd1 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # break 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.138 1+0 records in 00:24:12.138 1+0 records out 00:24:12.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315772 s, 13.0 MB/s 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.138 07:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:12.397 07:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:12.397 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.397 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:12.397 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:12.397 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:12.397 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:12.397 07:45:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:12.657 07:45:12 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:12.657 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75497 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' -z 75497 ']' 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # kill -0 75497 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # uname 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 75497 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:24:12.916 killing process with pid 75497 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 75497' 00:24:12.916 Received shutdown signal, test time was about 60.000000 seconds 00:24:12.916 00:24:12.916 Latency(us) 00:24:12.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.916 =================================================================================================================== 00:24:12.916 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # kill 75497 00:24:12.916 [2024-10-07 07:45:12.425900] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:12.916 07:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@977 -- # wait 75497 00:24:13.485 [2024-10-07 07:45:12.757268] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:24:14.864 00:24:14.864 real 0m17.368s 00:24:14.864 user 0m19.083s 00:24:14.864 sys 0m3.753s 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1129 -- # xtrace_disable 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.864 ************************************ 00:24:14.864 END TEST raid_rebuild_test 00:24:14.864 ************************************ 00:24:14.864 07:45:14 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:24:14.864 07:45:14 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:24:14.864 07:45:14 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:14.864 07:45:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:14.864 ************************************ 00:24:14.864 START TEST raid_rebuild_test_sb 00:24:14.864 ************************************ 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 2 true false true 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:14.864 07:45:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75937 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75937 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # '[' -z 75937 ']' 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:14.864 07:45:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:14.864 07:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.864 [2024-10-07 07:45:14.303460] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:14.864 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:14.864 Zero copy mechanism will not be used. 00:24:14.864 [2024-10-07 07:45:14.303648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75937 ] 00:24:15.124 [2024-10-07 07:45:14.487610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.383 [2024-10-07 07:45:14.695190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.383 [2024-10-07 07:45:14.917269] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:15.383 [2024-10-07 07:45:14.917317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:15.643 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:15.643 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # return 0 00:24:15.643 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:15.643 
07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:15.643 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.643 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.903 BaseBdev1_malloc 00:24:15.903 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.903 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:15.903 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.903 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.903 [2024-10-07 07:45:15.254244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:15.903 [2024-10-07 07:45:15.254331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.903 [2024-10-07 07:45:15.254356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:15.903 [2024-10-07 07:45:15.254375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.903 [2024-10-07 07:45:15.256948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.903 [2024-10-07 07:45:15.256995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:15.903 BaseBdev1 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.904 BaseBdev2_malloc 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.904 [2024-10-07 07:45:15.324287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:15.904 [2024-10-07 07:45:15.324545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.904 [2024-10-07 07:45:15.324614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:15.904 [2024-10-07 07:45:15.324718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.904 [2024-10-07 07:45:15.327567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.904 [2024-10-07 07:45:15.327738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:15.904 BaseBdev2 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.904 spare_malloc 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.904 spare_delay 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.904 [2024-10-07 07:45:15.390124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:15.904 [2024-10-07 07:45:15.390352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.904 [2024-10-07 07:45:15.390399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:15.904 [2024-10-07 07:45:15.390417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.904 [2024-10-07 07:45:15.393214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.904 [2024-10-07 07:45:15.393266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:15.904 spare 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:24:15.904 [2024-10-07 07:45:15.398259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.904 [2024-10-07 07:45:15.400777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:15.904 [2024-10-07 07:45:15.401129] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:15.904 [2024-10-07 07:45:15.401275] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:15.904 [2024-10-07 07:45:15.401763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:15.904 [2024-10-07 07:45:15.402107] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:15.904 [2024-10-07 07:45:15.402237] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:15.904 [2024-10-07 07:45:15.402667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:15.904 07:45:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.904 "name": "raid_bdev1", 00:24:15.904 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:15.904 "strip_size_kb": 0, 00:24:15.904 "state": "online", 00:24:15.904 "raid_level": "raid1", 00:24:15.904 "superblock": true, 00:24:15.904 "num_base_bdevs": 2, 00:24:15.904 "num_base_bdevs_discovered": 2, 00:24:15.904 "num_base_bdevs_operational": 2, 00:24:15.904 "base_bdevs_list": [ 00:24:15.904 { 00:24:15.904 "name": "BaseBdev1", 00:24:15.904 "uuid": "df801411-d21a-50be-b23f-522c7aa81255", 00:24:15.904 "is_configured": true, 00:24:15.904 "data_offset": 2048, 00:24:15.904 "data_size": 63488 00:24:15.904 }, 00:24:15.904 { 00:24:15.904 "name": "BaseBdev2", 00:24:15.904 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:15.904 "is_configured": true, 00:24:15.904 "data_offset": 2048, 00:24:15.904 "data_size": 63488 00:24:15.904 } 00:24:15.904 ] 00:24:15.904 }' 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.904 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.473 [2024-10-07 07:45:15.807052] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:16.473 
07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:16.473 07:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:16.733 [2024-10-07 07:45:16.082835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:16.733 /dev/nbd0 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:24:16.733 07:45:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:16.733 1+0 records in 00:24:16.733 1+0 records out 00:24:16.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319178 s, 12.8 MB/s 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:16.733 07:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:23.304 63488+0 records in 00:24:23.304 63488+0 records out 00:24:23.304 32505856 bytes (33 MB, 31 MiB) copied, 5.47324 s, 5.9 MB/s 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:23.304 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:23.304 [2024-10-07 07:45:21.895692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.305 [2024-10-07 07:45:21.907829] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:23.305 07:45:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.305 "name": "raid_bdev1", 00:24:23.305 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:23.305 "strip_size_kb": 0, 00:24:23.305 "state": "online", 00:24:23.305 "raid_level": "raid1", 00:24:23.305 "superblock": true, 00:24:23.305 "num_base_bdevs": 2, 
00:24:23.305 "num_base_bdevs_discovered": 1, 00:24:23.305 "num_base_bdevs_operational": 1, 00:24:23.305 "base_bdevs_list": [ 00:24:23.305 { 00:24:23.305 "name": null, 00:24:23.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.305 "is_configured": false, 00:24:23.305 "data_offset": 0, 00:24:23.305 "data_size": 63488 00:24:23.305 }, 00:24:23.305 { 00:24:23.305 "name": "BaseBdev2", 00:24:23.305 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:23.305 "is_configured": true, 00:24:23.305 "data_offset": 2048, 00:24:23.305 "data_size": 63488 00:24:23.305 } 00:24:23.305 ] 00:24:23.305 }' 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.305 07:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.305 07:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:23.305 07:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:23.305 07:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.305 [2024-10-07 07:45:22.272008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.305 [2024-10-07 07:45:22.289362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:24:23.305 07:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:23.305 07:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:23.305 [2024-10-07 07:45:22.291768] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:23.871 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.871 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.871 07:45:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:23.871 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:23.871 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:23.871 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.871 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:23.871 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.872 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.872 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:23.872 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:23.872 "name": "raid_bdev1", 00:24:23.872 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:23.872 "strip_size_kb": 0, 00:24:23.872 "state": "online", 00:24:23.872 "raid_level": "raid1", 00:24:23.872 "superblock": true, 00:24:23.872 "num_base_bdevs": 2, 00:24:23.872 "num_base_bdevs_discovered": 2, 00:24:23.872 "num_base_bdevs_operational": 2, 00:24:23.872 "process": { 00:24:23.872 "type": "rebuild", 00:24:23.872 "target": "spare", 00:24:23.872 "progress": { 00:24:23.872 "blocks": 20480, 00:24:23.872 "percent": 32 00:24:23.872 } 00:24:23.872 }, 00:24:23.872 "base_bdevs_list": [ 00:24:23.872 { 00:24:23.872 "name": "spare", 00:24:23.872 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:23.872 "is_configured": true, 00:24:23.872 "data_offset": 2048, 00:24:23.872 "data_size": 63488 00:24:23.872 }, 00:24:23.872 { 00:24:23.872 "name": "BaseBdev2", 00:24:23.872 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:23.872 "is_configured": true, 00:24:23.872 "data_offset": 2048, 00:24:23.872 "data_size": 63488 00:24:23.872 } 
00:24:23.872 ] 00:24:23.872 }' 00:24:23.872 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:23.872 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:23.872 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.130 [2024-10-07 07:45:23.480720] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.130 [2024-10-07 07:45:23.499670] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:24.130 [2024-10-07 07:45:23.499921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.130 [2024-10-07 07:45:23.499951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.130 [2024-10-07 07:45:23.499967] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.130 "name": "raid_bdev1", 00:24:24.130 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:24.130 "strip_size_kb": 0, 00:24:24.130 "state": "online", 00:24:24.130 "raid_level": "raid1", 00:24:24.130 "superblock": true, 00:24:24.130 "num_base_bdevs": 2, 00:24:24.130 "num_base_bdevs_discovered": 1, 00:24:24.130 "num_base_bdevs_operational": 1, 00:24:24.130 "base_bdevs_list": [ 00:24:24.130 { 00:24:24.130 "name": null, 00:24:24.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.130 "is_configured": false, 00:24:24.130 "data_offset": 0, 00:24:24.130 "data_size": 63488 00:24:24.130 }, 00:24:24.130 { 00:24:24.130 "name": "BaseBdev2", 00:24:24.130 "uuid": 
"1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:24.130 "is_configured": true, 00:24:24.130 "data_offset": 2048, 00:24:24.130 "data_size": 63488 00:24:24.130 } 00:24:24.130 ] 00:24:24.130 }' 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.130 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.388 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.645 07:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:24.645 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:24.646 "name": "raid_bdev1", 00:24:24.646 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:24.646 "strip_size_kb": 0, 00:24:24.646 "state": "online", 00:24:24.646 "raid_level": "raid1", 00:24:24.646 "superblock": true, 00:24:24.646 "num_base_bdevs": 2, 00:24:24.646 "num_base_bdevs_discovered": 1, 00:24:24.646 "num_base_bdevs_operational": 1, 00:24:24.646 "base_bdevs_list": [ 00:24:24.646 { 
00:24:24.646 "name": null, 00:24:24.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.646 "is_configured": false, 00:24:24.646 "data_offset": 0, 00:24:24.646 "data_size": 63488 00:24:24.646 }, 00:24:24.646 { 00:24:24.646 "name": "BaseBdev2", 00:24:24.646 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:24.646 "is_configured": true, 00:24:24.646 "data_offset": 2048, 00:24:24.646 "data_size": 63488 00:24:24.646 } 00:24:24.646 ] 00:24:24.646 }' 00:24:24.646 07:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.646 [2024-10-07 07:45:24.075063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:24.646 [2024-10-07 07:45:24.091902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:24.646 07:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:24.646 [2024-10-07 07:45:24.094368] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.581 07:45:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:25.581 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.581 "name": "raid_bdev1", 00:24:25.581 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:25.581 "strip_size_kb": 0, 00:24:25.581 "state": "online", 00:24:25.581 "raid_level": "raid1", 00:24:25.581 "superblock": true, 00:24:25.581 "num_base_bdevs": 2, 00:24:25.581 "num_base_bdevs_discovered": 2, 00:24:25.581 "num_base_bdevs_operational": 2, 00:24:25.581 "process": { 00:24:25.581 "type": "rebuild", 00:24:25.581 "target": "spare", 00:24:25.581 "progress": { 00:24:25.581 "blocks": 20480, 00:24:25.581 "percent": 32 00:24:25.581 } 00:24:25.581 }, 00:24:25.581 "base_bdevs_list": [ 00:24:25.581 { 00:24:25.581 "name": "spare", 00:24:25.581 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:25.581 "is_configured": true, 00:24:25.581 "data_offset": 2048, 00:24:25.581 "data_size": 63488 00:24:25.581 }, 00:24:25.581 { 00:24:25.581 "name": "BaseBdev2", 00:24:25.581 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:25.581 
"is_configured": true, 00:24:25.581 "data_offset": 2048, 00:24:25.581 "data_size": 63488 00:24:25.581 } 00:24:25.581 ] 00:24:25.581 }' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:25.840 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.840 "name": "raid_bdev1", 00:24:25.840 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:25.840 "strip_size_kb": 0, 00:24:25.840 "state": "online", 00:24:25.840 "raid_level": "raid1", 00:24:25.840 "superblock": true, 00:24:25.840 "num_base_bdevs": 2, 00:24:25.840 "num_base_bdevs_discovered": 2, 00:24:25.840 "num_base_bdevs_operational": 2, 00:24:25.840 "process": { 00:24:25.840 "type": "rebuild", 00:24:25.840 "target": "spare", 00:24:25.840 "progress": { 00:24:25.840 "blocks": 22528, 00:24:25.840 "percent": 35 00:24:25.840 } 00:24:25.840 }, 00:24:25.840 "base_bdevs_list": [ 00:24:25.840 { 00:24:25.840 "name": "spare", 00:24:25.840 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:25.840 "is_configured": true, 00:24:25.840 "data_offset": 2048, 00:24:25.840 "data_size": 63488 00:24:25.840 }, 00:24:25.840 { 00:24:25.840 "name": "BaseBdev2", 00:24:25.840 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:25.840 "is_configured": true, 00:24:25.840 "data_offset": 2048, 00:24:25.840 "data_size": 63488 00:24:25.840 } 00:24:25.840 ] 00:24:25.840 }' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.840 07:45:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.840 07:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:27.220 "name": "raid_bdev1", 00:24:27.220 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:27.220 "strip_size_kb": 0, 00:24:27.220 "state": "online", 00:24:27.220 "raid_level": "raid1", 00:24:27.220 "superblock": true, 00:24:27.220 "num_base_bdevs": 2, 00:24:27.220 "num_base_bdevs_discovered": 2, 00:24:27.220 "num_base_bdevs_operational": 2, 00:24:27.220 "process": { 
00:24:27.220 "type": "rebuild", 00:24:27.220 "target": "spare", 00:24:27.220 "progress": { 00:24:27.220 "blocks": 45056, 00:24:27.220 "percent": 70 00:24:27.220 } 00:24:27.220 }, 00:24:27.220 "base_bdevs_list": [ 00:24:27.220 { 00:24:27.220 "name": "spare", 00:24:27.220 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:27.220 "is_configured": true, 00:24:27.220 "data_offset": 2048, 00:24:27.220 "data_size": 63488 00:24:27.220 }, 00:24:27.220 { 00:24:27.220 "name": "BaseBdev2", 00:24:27.220 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:27.220 "is_configured": true, 00:24:27.220 "data_offset": 2048, 00:24:27.220 "data_size": 63488 00:24:27.220 } 00:24:27.220 ] 00:24:27.220 }' 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:27.220 07:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:27.789 [2024-10-07 07:45:27.214126] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:27.789 [2024-10-07 07:45:27.214218] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:27.789 [2024-10-07 07:45:27.214373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:28.048 
07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:28.048 "name": "raid_bdev1", 00:24:28.048 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:28.048 "strip_size_kb": 0, 00:24:28.048 "state": "online", 00:24:28.048 "raid_level": "raid1", 00:24:28.048 "superblock": true, 00:24:28.048 "num_base_bdevs": 2, 00:24:28.048 "num_base_bdevs_discovered": 2, 00:24:28.048 "num_base_bdevs_operational": 2, 00:24:28.048 "base_bdevs_list": [ 00:24:28.048 { 00:24:28.048 "name": "spare", 00:24:28.048 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:28.048 "is_configured": true, 00:24:28.048 "data_offset": 2048, 00:24:28.048 "data_size": 63488 00:24:28.048 }, 00:24:28.048 { 00:24:28.048 "name": "BaseBdev2", 00:24:28.048 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:28.048 "is_configured": true, 00:24:28.048 "data_offset": 2048, 00:24:28.048 "data_size": 63488 00:24:28.048 } 00:24:28.048 ] 00:24:28.048 }' 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:28.048 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:28.308 "name": "raid_bdev1", 00:24:28.308 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:28.308 "strip_size_kb": 0, 00:24:28.308 "state": "online", 00:24:28.308 "raid_level": "raid1", 00:24:28.308 "superblock": true, 00:24:28.308 "num_base_bdevs": 2, 00:24:28.308 "num_base_bdevs_discovered": 2, 00:24:28.308 "num_base_bdevs_operational": 2, 00:24:28.308 "base_bdevs_list": [ 00:24:28.308 { 00:24:28.308 
"name": "spare", 00:24:28.308 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:28.308 "is_configured": true, 00:24:28.308 "data_offset": 2048, 00:24:28.308 "data_size": 63488 00:24:28.308 }, 00:24:28.308 { 00:24:28.308 "name": "BaseBdev2", 00:24:28.308 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:28.308 "is_configured": true, 00:24:28.308 "data_offset": 2048, 00:24:28.308 "data_size": 63488 00:24:28.308 } 00:24:28.308 ] 00:24:28.308 }' 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.308 "name": "raid_bdev1", 00:24:28.308 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:28.308 "strip_size_kb": 0, 00:24:28.308 "state": "online", 00:24:28.308 "raid_level": "raid1", 00:24:28.308 "superblock": true, 00:24:28.308 "num_base_bdevs": 2, 00:24:28.308 "num_base_bdevs_discovered": 2, 00:24:28.308 "num_base_bdevs_operational": 2, 00:24:28.308 "base_bdevs_list": [ 00:24:28.308 { 00:24:28.308 "name": "spare", 00:24:28.308 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:28.308 "is_configured": true, 00:24:28.308 "data_offset": 2048, 00:24:28.308 "data_size": 63488 00:24:28.308 }, 00:24:28.308 { 00:24:28.308 "name": "BaseBdev2", 00:24:28.308 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:28.308 "is_configured": true, 00:24:28.308 "data_offset": 2048, 00:24:28.308 "data_size": 63488 00:24:28.308 } 00:24:28.308 ] 00:24:28.308 }' 00:24:28.308 07:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.309 07:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.878 [2024-10-07 07:45:28.217997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:28.878 [2024-10-07 07:45:28.218030] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:28.878 [2024-10-07 07:45:28.218118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.878 [2024-10-07 07:45:28.218194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.878 [2024-10-07 07:45:28.218208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:28.878 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:29.137 /dev/nbd0 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:29.137 1+0 records in 00:24:29.137 1+0 records out 00:24:29.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321592 s, 12.7 MB/s 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:29.137 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:29.396 /dev/nbd1 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:24:29.396 07:45:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:29.396 1+0 records in 00:24:29.396 1+0 records out 00:24:29.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331377 s, 12.4 MB/s 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:29.396 07:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:29.655 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:29.655 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:29.655 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:29.655 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:29.655 
07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:29.655 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.655 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.913 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.172 [2024-10-07 07:45:29.668986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:30.172 [2024-10-07 07:45:29.669198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.172 [2024-10-07 07:45:29.669243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:30.172 [2024-10-07 07:45:29.669263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.172 [2024-10-07 07:45:29.672101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.172 [2024-10-07 07:45:29.672142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:30.172 [2024-10-07 07:45:29.672243] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:30.172 [2024-10-07 
07:45:29.672292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:30.172 [2024-10-07 07:45:29.672468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:30.172 spare 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.172 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.431 [2024-10-07 07:45:29.772623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:30.431 [2024-10-07 07:45:29.772871] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:30.431 [2024-10-07 07:45:29.773366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:24:30.431 [2024-10-07 07:45:29.773764] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:30.431 [2024-10-07 07:45:29.773787] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:30.431 [2024-10-07 07:45:29.774032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:30.431 "name": "raid_bdev1", 00:24:30.431 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:30.431 "strip_size_kb": 0, 00:24:30.431 "state": "online", 00:24:30.431 "raid_level": "raid1", 00:24:30.431 "superblock": true, 00:24:30.431 "num_base_bdevs": 2, 00:24:30.431 "num_base_bdevs_discovered": 2, 00:24:30.431 "num_base_bdevs_operational": 2, 00:24:30.431 "base_bdevs_list": [ 00:24:30.431 { 00:24:30.431 "name": "spare", 00:24:30.431 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:30.431 "is_configured": true, 00:24:30.431 "data_offset": 2048, 00:24:30.431 "data_size": 63488 00:24:30.431 }, 00:24:30.431 { 00:24:30.431 "name": "BaseBdev2", 00:24:30.431 "uuid": 
"1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:30.431 "is_configured": true, 00:24:30.431 "data_offset": 2048, 00:24:30.431 "data_size": 63488 00:24:30.431 } 00:24:30.431 ] 00:24:30.431 }' 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:30.431 07:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:30.690 "name": "raid_bdev1", 00:24:30.690 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:30.690 "strip_size_kb": 0, 00:24:30.690 "state": "online", 00:24:30.690 "raid_level": "raid1", 00:24:30.690 "superblock": true, 00:24:30.690 "num_base_bdevs": 2, 00:24:30.690 "num_base_bdevs_discovered": 2, 00:24:30.690 "num_base_bdevs_operational": 2, 00:24:30.690 "base_bdevs_list": [ 00:24:30.690 { 
00:24:30.690 "name": "spare", 00:24:30.690 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:30.690 "is_configured": true, 00:24:30.690 "data_offset": 2048, 00:24:30.690 "data_size": 63488 00:24:30.690 }, 00:24:30.690 { 00:24:30.690 "name": "BaseBdev2", 00:24:30.690 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:30.690 "is_configured": true, 00:24:30.690 "data_offset": 2048, 00:24:30.690 "data_size": 63488 00:24:30.690 } 00:24:30.690 ] 00:24:30.690 }' 00:24:30.690 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.957 [2024-10-07 07:45:30.358134] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:30.957 "name": "raid_bdev1", 00:24:30.957 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:30.957 "strip_size_kb": 0, 00:24:30.957 
"state": "online", 00:24:30.957 "raid_level": "raid1", 00:24:30.957 "superblock": true, 00:24:30.957 "num_base_bdevs": 2, 00:24:30.957 "num_base_bdevs_discovered": 1, 00:24:30.957 "num_base_bdevs_operational": 1, 00:24:30.957 "base_bdevs_list": [ 00:24:30.957 { 00:24:30.957 "name": null, 00:24:30.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.957 "is_configured": false, 00:24:30.957 "data_offset": 0, 00:24:30.957 "data_size": 63488 00:24:30.957 }, 00:24:30.957 { 00:24:30.957 "name": "BaseBdev2", 00:24:30.957 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:30.957 "is_configured": true, 00:24:30.957 "data_offset": 2048, 00:24:30.957 "data_size": 63488 00:24:30.957 } 00:24:30.957 ] 00:24:30.957 }' 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:30.957 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.233 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:31.233 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:31.233 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.233 [2024-10-07 07:45:30.746218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:31.233 [2024-10-07 07:45:30.746576] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:31.233 [2024-10-07 07:45:30.746616] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:31.233 [2024-10-07 07:45:30.746657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:31.233 [2024-10-07 07:45:30.763805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:24:31.233 07:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:31.233 07:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:31.233 [2024-10-07 07:45:30.766267] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:32.609 "name": "raid_bdev1", 00:24:32.609 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:32.609 "strip_size_kb": 0, 00:24:32.609 "state": "online", 00:24:32.609 "raid_level": "raid1", 
00:24:32.609 "superblock": true, 00:24:32.609 "num_base_bdevs": 2, 00:24:32.609 "num_base_bdevs_discovered": 2, 00:24:32.609 "num_base_bdevs_operational": 2, 00:24:32.609 "process": { 00:24:32.609 "type": "rebuild", 00:24:32.609 "target": "spare", 00:24:32.609 "progress": { 00:24:32.609 "blocks": 20480, 00:24:32.609 "percent": 32 00:24:32.609 } 00:24:32.609 }, 00:24:32.609 "base_bdevs_list": [ 00:24:32.609 { 00:24:32.609 "name": "spare", 00:24:32.609 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:32.609 "is_configured": true, 00:24:32.609 "data_offset": 2048, 00:24:32.609 "data_size": 63488 00:24:32.609 }, 00:24:32.609 { 00:24:32.609 "name": "BaseBdev2", 00:24:32.609 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:32.609 "is_configured": true, 00:24:32.609 "data_offset": 2048, 00:24:32.609 "data_size": 63488 00:24:32.609 } 00:24:32.609 ] 00:24:32.609 }' 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:32.609 07:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.609 [2024-10-07 07:45:31.919182] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.609 [2024-10-07 07:45:31.973945] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:32.609 [2024-10-07 07:45:31.974243] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:24:32.609 [2024-10-07 07:45:31.974270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.609 [2024-10-07 07:45:31.974285] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.609 "name": "raid_bdev1", 00:24:32.609 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:32.609 "strip_size_kb": 0, 00:24:32.609 "state": "online", 00:24:32.609 "raid_level": "raid1", 00:24:32.609 "superblock": true, 00:24:32.609 "num_base_bdevs": 2, 00:24:32.609 "num_base_bdevs_discovered": 1, 00:24:32.609 "num_base_bdevs_operational": 1, 00:24:32.609 "base_bdevs_list": [ 00:24:32.609 { 00:24:32.609 "name": null, 00:24:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.609 "is_configured": false, 00:24:32.609 "data_offset": 0, 00:24:32.609 "data_size": 63488 00:24:32.609 }, 00:24:32.609 { 00:24:32.609 "name": "BaseBdev2", 00:24:32.609 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:32.609 "is_configured": true, 00:24:32.609 "data_offset": 2048, 00:24:32.609 "data_size": 63488 00:24:32.609 } 00:24:32.609 ] 00:24:32.609 }' 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.609 07:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.868 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:32.868 07:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:32.868 07:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.868 [2024-10-07 07:45:32.418511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:32.868 [2024-10-07 07:45:32.419753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.868 [2024-10-07 07:45:32.419790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:32.868 [2024-10-07 07:45:32.419806] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.868 [2024-10-07 07:45:32.420343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.868 [2024-10-07 07:45:32.420370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:32.868 [2024-10-07 07:45:32.420468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:32.868 [2024-10-07 07:45:32.420486] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:32.868 [2024-10-07 07:45:32.420499] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:32.868 [2024-10-07 07:45:32.420536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:33.127 [2024-10-07 07:45:32.436794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:24:33.127 spare 00:24:33.127 07:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:33.127 07:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:33.127 [2024-10-07 07:45:32.439196] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:34.062 "name": "raid_bdev1", 00:24:34.062 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:34.062 "strip_size_kb": 0, 00:24:34.062 "state": "online", 00:24:34.062 "raid_level": "raid1", 00:24:34.062 "superblock": true, 00:24:34.062 "num_base_bdevs": 2, 00:24:34.062 "num_base_bdevs_discovered": 2, 00:24:34.062 "num_base_bdevs_operational": 2, 00:24:34.062 "process": { 00:24:34.062 "type": "rebuild", 00:24:34.062 "target": "spare", 00:24:34.062 "progress": { 00:24:34.062 "blocks": 20480, 00:24:34.062 "percent": 32 00:24:34.062 } 00:24:34.062 }, 00:24:34.062 "base_bdevs_list": [ 00:24:34.062 { 00:24:34.062 "name": "spare", 00:24:34.062 "uuid": "8982e87e-a731-5179-b36d-f538bf0be4b8", 00:24:34.062 "is_configured": true, 00:24:34.062 "data_offset": 2048, 00:24:34.062 "data_size": 63488 00:24:34.062 }, 00:24:34.062 { 00:24:34.062 "name": "BaseBdev2", 00:24:34.062 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:34.062 "is_configured": true, 00:24:34.062 "data_offset": 2048, 00:24:34.062 "data_size": 63488 00:24:34.062 } 00:24:34.062 ] 00:24:34.062 }' 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:34.062 
07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:34.062 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.062 [2024-10-07 07:45:33.616103] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:34.321 [2024-10-07 07:45:33.647221] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:34.321 [2024-10-07 07:45:33.647459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.321 [2024-10-07 07:45:33.647493] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:34.321 [2024-10-07 07:45:33.647508] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.321 "name": "raid_bdev1", 00:24:34.321 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:34.321 "strip_size_kb": 0, 00:24:34.321 "state": "online", 00:24:34.321 "raid_level": "raid1", 00:24:34.321 "superblock": true, 00:24:34.321 "num_base_bdevs": 2, 00:24:34.321 "num_base_bdevs_discovered": 1, 00:24:34.321 "num_base_bdevs_operational": 1, 00:24:34.321 "base_bdevs_list": [ 00:24:34.321 { 00:24:34.321 "name": null, 00:24:34.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.321 "is_configured": false, 00:24:34.321 "data_offset": 0, 00:24:34.321 "data_size": 63488 00:24:34.321 }, 00:24:34.321 { 00:24:34.321 "name": "BaseBdev2", 00:24:34.321 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:34.321 "is_configured": true, 00:24:34.321 "data_offset": 2048, 00:24:34.321 "data_size": 63488 00:24:34.321 } 00:24:34.321 ] 00:24:34.321 }' 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.321 07:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.888 07:45:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:34.888 "name": "raid_bdev1", 00:24:34.888 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:34.888 "strip_size_kb": 0, 00:24:34.888 "state": "online", 00:24:34.888 "raid_level": "raid1", 00:24:34.888 "superblock": true, 00:24:34.888 "num_base_bdevs": 2, 00:24:34.888 "num_base_bdevs_discovered": 1, 00:24:34.888 "num_base_bdevs_operational": 1, 00:24:34.888 "base_bdevs_list": [ 00:24:34.888 { 00:24:34.888 "name": null, 00:24:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.888 "is_configured": false, 00:24:34.888 "data_offset": 0, 00:24:34.888 "data_size": 63488 00:24:34.888 }, 00:24:34.888 { 00:24:34.888 "name": "BaseBdev2", 00:24:34.888 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:34.888 "is_configured": true, 00:24:34.888 "data_offset": 2048, 00:24:34.888 "data_size": 
63488 00:24:34.888 } 00:24:34.888 ] 00:24:34.888 }' 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.888 [2024-10-07 07:45:34.321193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:34.888 [2024-10-07 07:45:34.321374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.888 [2024-10-07 07:45:34.321416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:34.888 [2024-10-07 07:45:34.321435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.888 [2024-10-07 07:45:34.321960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.888 [2024-10-07 07:45:34.321984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:24:34.888 [2024-10-07 07:45:34.322073] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:34.888 [2024-10-07 07:45:34.322089] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:34.888 [2024-10-07 07:45:34.322104] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:34.888 [2024-10-07 07:45:34.322120] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:34.888 BaseBdev1 00:24:34.888 07:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:34.889 07:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:35.825 "name": "raid_bdev1", 00:24:35.825 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:35.825 "strip_size_kb": 0, 00:24:35.825 "state": "online", 00:24:35.825 "raid_level": "raid1", 00:24:35.825 "superblock": true, 00:24:35.825 "num_base_bdevs": 2, 00:24:35.825 "num_base_bdevs_discovered": 1, 00:24:35.825 "num_base_bdevs_operational": 1, 00:24:35.825 "base_bdevs_list": [ 00:24:35.825 { 00:24:35.825 "name": null, 00:24:35.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.825 "is_configured": false, 00:24:35.825 "data_offset": 0, 00:24:35.825 "data_size": 63488 00:24:35.825 }, 00:24:35.825 { 00:24:35.825 "name": "BaseBdev2", 00:24:35.825 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:35.825 "is_configured": true, 00:24:35.825 "data_offset": 2048, 00:24:35.825 "data_size": 63488 00:24:35.825 } 00:24:35.825 ] 00:24:35.825 }' 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:35.825 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:36.391 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:36.392 "name": "raid_bdev1", 00:24:36.392 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:36.392 "strip_size_kb": 0, 00:24:36.392 "state": "online", 00:24:36.392 "raid_level": "raid1", 00:24:36.392 "superblock": true, 00:24:36.392 "num_base_bdevs": 2, 00:24:36.392 "num_base_bdevs_discovered": 1, 00:24:36.392 "num_base_bdevs_operational": 1, 00:24:36.392 "base_bdevs_list": [ 00:24:36.392 { 00:24:36.392 "name": null, 00:24:36.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.392 "is_configured": false, 00:24:36.392 "data_offset": 0, 00:24:36.392 "data_size": 63488 00:24:36.392 }, 00:24:36.392 { 00:24:36.392 "name": "BaseBdev2", 00:24:36.392 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:36.392 "is_configured": true, 00:24:36.392 "data_offset": 2048, 00:24:36.392 "data_size": 63488 00:24:36.392 } 00:24:36.392 ] 00:24:36.392 }' 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:36.392 07:45:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # local es=0 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:36.392 [2024-10-07 07:45:35.889661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:36.392 [2024-10-07 07:45:35.890010] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:36.392 [2024-10-07 07:45:35.890040] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:36.392 request: 00:24:36.392 { 00:24:36.392 "base_bdev": "BaseBdev1", 00:24:36.392 "raid_bdev": "raid_bdev1", 00:24:36.392 "method": 
"bdev_raid_add_base_bdev", 00:24:36.392 "req_id": 1 00:24:36.392 } 00:24:36.392 Got JSON-RPC error response 00:24:36.392 response: 00:24:36.392 { 00:24:36.392 "code": -22, 00:24:36.392 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:36.392 } 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@656 -- # es=1 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:24:36.392 07:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:37.768 07:45:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:37.768 "name": "raid_bdev1", 00:24:37.768 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:37.768 "strip_size_kb": 0, 00:24:37.768 "state": "online", 00:24:37.768 "raid_level": "raid1", 00:24:37.768 "superblock": true, 00:24:37.768 "num_base_bdevs": 2, 00:24:37.768 "num_base_bdevs_discovered": 1, 00:24:37.768 "num_base_bdevs_operational": 1, 00:24:37.768 "base_bdevs_list": [ 00:24:37.768 { 00:24:37.768 "name": null, 00:24:37.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.768 "is_configured": false, 00:24:37.768 "data_offset": 0, 00:24:37.768 "data_size": 63488 00:24:37.768 }, 00:24:37.768 { 00:24:37.768 "name": "BaseBdev2", 00:24:37.768 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:37.768 "is_configured": true, 00:24:37.768 "data_offset": 2048, 00:24:37.768 "data_size": 63488 00:24:37.768 } 00:24:37.768 ] 00:24:37.768 }' 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:37.768 07:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:38.026 "name": "raid_bdev1", 00:24:38.026 "uuid": "d3ed975b-2c8f-4a2e-94cc-90988f2192a0", 00:24:38.026 "strip_size_kb": 0, 00:24:38.026 "state": "online", 00:24:38.026 "raid_level": "raid1", 00:24:38.026 "superblock": true, 00:24:38.026 "num_base_bdevs": 2, 00:24:38.026 "num_base_bdevs_discovered": 1, 00:24:38.026 "num_base_bdevs_operational": 1, 00:24:38.026 "base_bdevs_list": [ 00:24:38.026 { 00:24:38.026 "name": null, 00:24:38.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.026 "is_configured": false, 00:24:38.026 "data_offset": 0, 00:24:38.026 "data_size": 63488 00:24:38.026 }, 00:24:38.026 { 00:24:38.026 "name": "BaseBdev2", 00:24:38.026 "uuid": "1c1531ae-b9c2-5fd6-add9-fcb66d2e180b", 00:24:38.026 "is_configured": true, 00:24:38.026 "data_offset": 2048, 00:24:38.026 "data_size": 63488 00:24:38.026 } 00:24:38.026 ] 00:24:38.026 }' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75937 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' -z 75937 ']' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # kill -0 75937 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # uname 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 75937 00:24:38.026 killing process with pid 75937 00:24:38.026 Received shutdown signal, test time was about 60.000000 seconds 00:24:38.026 00:24:38.026 Latency(us) 00:24:38.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.026 =================================================================================================================== 00:24:38.026 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 75937' 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # kill 75937 00:24:38.026 [2024-10-07 07:45:37.527453] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:38.026 [2024-10-07 07:45:37.527594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:24:38.026 [2024-10-07 07:45:37.527648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:38.026 [2024-10-07 07:45:37.527663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:38.026 07:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@977 -- # wait 75937 00:24:38.627 [2024-10-07 07:45:37.876480] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:40.009 00:24:40.009 real 0m25.037s 00:24:40.009 user 0m29.597s 00:24:40.009 sys 0m4.771s 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.009 ************************************ 00:24:40.009 END TEST raid_rebuild_test_sb 00:24:40.009 ************************************ 00:24:40.009 07:45:39 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:24:40.009 07:45:39 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:24:40.009 07:45:39 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:40.009 07:45:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:40.009 ************************************ 00:24:40.009 START TEST raid_rebuild_test_io 00:24:40.009 ************************************ 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 2 false true true 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 
-- # local superblock=false 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:40.009 
07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76678 00:24:40.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76678 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # '[' -z 76678 ']' 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:40.009 07:45:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:40.009 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:40.009 Zero copy mechanism will not be used. 00:24:40.009 [2024-10-07 07:45:39.412085] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:24:40.009 [2024-10-07 07:45:39.412268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76678 ] 00:24:40.268 [2024-10-07 07:45:39.597429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.527 [2024-10-07 07:45:39.878738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.785 [2024-10-07 07:45:40.099741] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:40.785 [2024-10-07 07:45:40.099801] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # return 0 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.045 BaseBdev1_malloc 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:41.045 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 [2024-10-07 07:45:40.428100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:24:41.046 [2024-10-07 07:45:40.428345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.046 [2024-10-07 07:45:40.428390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:41.046 [2024-10-07 07:45:40.428412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.046 [2024-10-07 07:45:40.431265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.046 [2024-10-07 07:45:40.431318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:41.046 BaseBdev1 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 BaseBdev2_malloc 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 [2024-10-07 07:45:40.493350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:41.046 [2024-10-07 07:45:40.493579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.046 [2024-10-07 07:45:40.493616] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:41.046 [2024-10-07 07:45:40.493635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.046 [2024-10-07 07:45:40.496273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.046 [2024-10-07 07:45:40.496331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:41.046 BaseBdev2 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 spare_malloc 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 spare_delay 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 [2024-10-07 07:45:40.559658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:24:41.046 [2024-10-07 07:45:40.559878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.046 [2024-10-07 07:45:40.559944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:41.046 [2024-10-07 07:45:40.560080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.046 [2024-10-07 07:45:40.562811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.046 [2024-10-07 07:45:40.562858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:41.046 spare 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 [2024-10-07 07:45:40.567697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:41.046 [2024-10-07 07:45:40.570099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:41.046 [2024-10-07 07:45:40.570351] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:41.046 [2024-10-07 07:45:40.570408] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:41.046 [2024-10-07 07:45:40.570867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:41.046 [2024-10-07 07:45:40.571165] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:41.046 [2024-10-07 07:45:40.571284] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:24:41.046 [2024-10-07 07:45:40.571638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.046 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.305 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.305 
"name": "raid_bdev1", 00:24:41.305 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:41.305 "strip_size_kb": 0, 00:24:41.305 "state": "online", 00:24:41.305 "raid_level": "raid1", 00:24:41.305 "superblock": false, 00:24:41.305 "num_base_bdevs": 2, 00:24:41.305 "num_base_bdevs_discovered": 2, 00:24:41.305 "num_base_bdevs_operational": 2, 00:24:41.305 "base_bdevs_list": [ 00:24:41.305 { 00:24:41.305 "name": "BaseBdev1", 00:24:41.305 "uuid": "69b95981-f8c6-553f-b1d6-41d6123a8242", 00:24:41.306 "is_configured": true, 00:24:41.306 "data_offset": 0, 00:24:41.306 "data_size": 65536 00:24:41.306 }, 00:24:41.306 { 00:24:41.306 "name": "BaseBdev2", 00:24:41.306 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:41.306 "is_configured": true, 00:24:41.306 "data_offset": 0, 00:24:41.306 "data_size": 65536 00:24:41.306 } 00:24:41.306 ] 00:24:41.306 }' 00:24:41.306 07:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.306 07:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.565 [2024-10-07 07:45:41.024123] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.565 [2024-10-07 07:45:41.115839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:41.565 07:45:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:41.565 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.824 "name": "raid_bdev1", 00:24:41.824 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:41.824 "strip_size_kb": 0, 00:24:41.824 "state": "online", 00:24:41.824 "raid_level": "raid1", 00:24:41.824 "superblock": false, 00:24:41.824 "num_base_bdevs": 2, 00:24:41.824 "num_base_bdevs_discovered": 1, 00:24:41.824 "num_base_bdevs_operational": 1, 00:24:41.824 "base_bdevs_list": [ 00:24:41.824 { 00:24:41.824 "name": null, 00:24:41.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.824 "is_configured": false, 00:24:41.824 "data_offset": 0, 00:24:41.824 "data_size": 65536 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "name": "BaseBdev2", 00:24:41.824 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:41.824 "is_configured": true, 00:24:41.824 "data_offset": 0, 00:24:41.824 "data_size": 65536 00:24:41.824 } 00:24:41.824 ] 00:24:41.824 }' 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:41.824 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.824 [2024-10-07 07:45:41.264682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:41.824 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:41.824 Zero copy mechanism will not be used. 00:24:41.824 Running I/O for 60 seconds... 00:24:42.083 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:42.083 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:42.083 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.083 [2024-10-07 07:45:41.578191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.083 07:45:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:42.083 07:45:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:42.083 [2024-10-07 07:45:41.637113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:42.083 [2024-10-07 07:45:41.639701] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:42.384 [2024-10-07 07:45:41.741655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:42.384 [2024-10-07 07:45:41.742289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:42.652 [2024-10-07 07:45:41.962153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:42.652 [2024-10-07 07:45:41.962729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:42.911 182.00 IOPS, 546.00 MiB/s 
[2024-10-07 07:45:42.350670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:42.911 [2024-10-07 07:45:42.351309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.168 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:43.169 [2024-10-07 07:45:42.677589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:43.169 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.169 "name": "raid_bdev1", 00:24:43.169 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:43.169 "strip_size_kb": 0, 00:24:43.169 "state": "online", 00:24:43.169 "raid_level": "raid1", 00:24:43.169 "superblock": false, 00:24:43.169 "num_base_bdevs": 2, 00:24:43.169 "num_base_bdevs_discovered": 
2, 00:24:43.169 "num_base_bdevs_operational": 2, 00:24:43.169 "process": { 00:24:43.169 "type": "rebuild", 00:24:43.169 "target": "spare", 00:24:43.169 "progress": { 00:24:43.169 "blocks": 12288, 00:24:43.169 "percent": 18 00:24:43.169 } 00:24:43.169 }, 00:24:43.169 "base_bdevs_list": [ 00:24:43.169 { 00:24:43.169 "name": "spare", 00:24:43.169 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:43.169 "is_configured": true, 00:24:43.169 "data_offset": 0, 00:24:43.169 "data_size": 65536 00:24:43.169 }, 00:24:43.169 { 00:24:43.169 "name": "BaseBdev2", 00:24:43.169 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:43.169 "is_configured": true, 00:24:43.169 "data_offset": 0, 00:24:43.169 "data_size": 65536 00:24:43.169 } 00:24:43.169 ] 00:24:43.169 }' 00:24:43.169 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.428 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.428 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.428 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.428 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.429 [2024-10-07 07:45:42.778338] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:43.429 [2024-10-07 07:45:42.786736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:43.429 [2024-10-07 07:45:42.787188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:43.429 
[2024-10-07 07:45:42.788752] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:43.429 [2024-10-07 07:45:42.804314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.429 [2024-10-07 07:45:42.804361] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:43.429 [2024-10-07 07:45:42.804376] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:43.429 [2024-10-07 07:45:42.846222] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.429 07:45:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.429 "name": "raid_bdev1", 00:24:43.429 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:43.429 "strip_size_kb": 0, 00:24:43.429 "state": "online", 00:24:43.429 "raid_level": "raid1", 00:24:43.429 "superblock": false, 00:24:43.429 "num_base_bdevs": 2, 00:24:43.429 "num_base_bdevs_discovered": 1, 00:24:43.429 "num_base_bdevs_operational": 1, 00:24:43.429 "base_bdevs_list": [ 00:24:43.429 { 00:24:43.429 "name": null, 00:24:43.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.429 "is_configured": false, 00:24:43.429 "data_offset": 0, 00:24:43.429 "data_size": 65536 00:24:43.429 }, 00:24:43.429 { 00:24:43.429 "name": "BaseBdev2", 00:24:43.429 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:43.429 "is_configured": true, 00:24:43.429 "data_offset": 0, 00:24:43.429 "data_size": 65536 00:24:43.429 } 00:24:43.429 ] 00:24:43.429 }' 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.429 07:45:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.997 180.00 IOPS, 540.00 MiB/s 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:43.997 07:45:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:43.997 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.997 "name": "raid_bdev1", 00:24:43.997 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:43.997 "strip_size_kb": 0, 00:24:43.997 "state": "online", 00:24:43.997 "raid_level": "raid1", 00:24:43.997 "superblock": false, 00:24:43.997 "num_base_bdevs": 2, 00:24:43.997 "num_base_bdevs_discovered": 1, 00:24:43.997 "num_base_bdevs_operational": 1, 00:24:43.997 "base_bdevs_list": [ 00:24:43.997 { 00:24:43.997 "name": null, 00:24:43.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.997 "is_configured": false, 00:24:43.997 "data_offset": 0, 00:24:43.997 "data_size": 65536 00:24:43.997 }, 00:24:43.998 { 00:24:43.998 "name": "BaseBdev2", 00:24:43.998 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:43.998 "is_configured": true, 00:24:43.998 "data_offset": 0, 00:24:43.998 "data_size": 65536 00:24:43.998 } 00:24:43.998 ] 00:24:43.998 }' 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- 
# jq -r '.process.target // "none"' 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.998 [2024-10-07 07:45:43.433488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:43.998 07:45:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:43.998 [2024-10-07 07:45:43.511808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:43.998 [2024-10-07 07:45:43.514163] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:44.257 [2024-10-07 07:45:43.635216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:44.257 [2024-10-07 07:45:43.756655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:44.824 [2024-10-07 07:45:44.090675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:44.824 [2024-10-07 07:45:44.207354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:44.824 [2024-10-07 07:45:44.207725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:45.082 180.67 IOPS, 542.00 MiB/s 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.082 "name": "raid_bdev1", 00:24:45.082 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:45.082 "strip_size_kb": 0, 00:24:45.082 "state": "online", 00:24:45.082 "raid_level": "raid1", 00:24:45.082 "superblock": false, 00:24:45.082 "num_base_bdevs": 2, 00:24:45.082 "num_base_bdevs_discovered": 2, 00:24:45.082 "num_base_bdevs_operational": 2, 00:24:45.082 "process": { 00:24:45.082 "type": "rebuild", 00:24:45.082 "target": "spare", 00:24:45.082 "progress": { 00:24:45.082 "blocks": 12288, 00:24:45.082 "percent": 18 00:24:45.082 } 00:24:45.082 }, 00:24:45.082 "base_bdevs_list": [ 00:24:45.082 { 00:24:45.082 "name": "spare", 00:24:45.082 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:45.082 "is_configured": true, 00:24:45.082 "data_offset": 0, 00:24:45.082 "data_size": 65536 00:24:45.082 }, 00:24:45.082 { 00:24:45.082 "name": "BaseBdev2", 00:24:45.082 "uuid": 
"d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:45.082 "is_configured": true, 00:24:45.082 "data_offset": 0, 00:24:45.082 "data_size": 65536 00:24:45.082 } 00:24:45.082 ] 00:24:45.082 }' 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=434 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:45.082 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.340 07:45:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.340 [2024-10-07 07:45:44.654920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.340 "name": "raid_bdev1", 00:24:45.340 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:45.340 "strip_size_kb": 0, 00:24:45.340 "state": "online", 00:24:45.340 "raid_level": "raid1", 00:24:45.340 "superblock": false, 00:24:45.340 "num_base_bdevs": 2, 00:24:45.340 "num_base_bdevs_discovered": 2, 00:24:45.340 "num_base_bdevs_operational": 2, 00:24:45.340 "process": { 00:24:45.340 "type": "rebuild", 00:24:45.340 "target": "spare", 00:24:45.340 "progress": { 00:24:45.340 "blocks": 14336, 00:24:45.340 "percent": 21 00:24:45.340 } 00:24:45.340 }, 00:24:45.340 "base_bdevs_list": [ 00:24:45.340 { 00:24:45.340 "name": "spare", 00:24:45.340 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:45.340 "is_configured": true, 00:24:45.340 "data_offset": 0, 00:24:45.340 "data_size": 65536 00:24:45.340 }, 00:24:45.340 { 00:24:45.340 "name": "BaseBdev2", 00:24:45.340 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:45.340 "is_configured": true, 00:24:45.340 "data_offset": 0, 00:24:45.340 "data_size": 65536 00:24:45.340 } 00:24:45.340 ] 00:24:45.340 }' 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.340 07:45:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.340 07:45:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:45.599 [2024-10-07 07:45:45.027943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:46.116 154.25 IOPS, 462.75 MiB/s [2024-10-07 07:45:45.498341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:46.116 [2024-10-07 07:45:45.607119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:46.377 [2024-10-07 07:45:45.824123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:46.377 "name": "raid_bdev1", 00:24:46.377 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:46.377 "strip_size_kb": 0, 00:24:46.377 "state": "online", 00:24:46.377 "raid_level": "raid1", 00:24:46.377 "superblock": false, 00:24:46.377 "num_base_bdevs": 2, 00:24:46.377 "num_base_bdevs_discovered": 2, 00:24:46.377 "num_base_bdevs_operational": 2, 00:24:46.377 "process": { 00:24:46.377 "type": "rebuild", 00:24:46.377 "target": "spare", 00:24:46.377 "progress": { 00:24:46.377 "blocks": 30720, 00:24:46.377 "percent": 46 00:24:46.377 } 00:24:46.377 }, 00:24:46.377 "base_bdevs_list": [ 00:24:46.377 { 00:24:46.377 "name": "spare", 00:24:46.377 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:46.377 "is_configured": true, 00:24:46.377 "data_offset": 0, 00:24:46.377 "data_size": 65536 00:24:46.377 }, 00:24:46.377 { 00:24:46.377 "name": "BaseBdev2", 00:24:46.377 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:46.377 "is_configured": true, 00:24:46.377 "data_offset": 0, 00:24:46.377 "data_size": 65536 00:24:46.377 } 00:24:46.377 ] 00:24:46.377 }' 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:46.377 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.637 07:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:46.637 [2024-10-07 07:45:46.046931] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:46.896 133.40 IOPS, 400.20 MiB/s [2024-10-07 07:45:46.370212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:47.466 [2024-10-07 07:45:46.727519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:47.466 [2024-10-07 07:45:46.958795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:47.466 "name": "raid_bdev1", 00:24:47.466 
"uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:47.466 "strip_size_kb": 0, 00:24:47.466 "state": "online", 00:24:47.466 "raid_level": "raid1", 00:24:47.466 "superblock": false, 00:24:47.466 "num_base_bdevs": 2, 00:24:47.466 "num_base_bdevs_discovered": 2, 00:24:47.466 "num_base_bdevs_operational": 2, 00:24:47.466 "process": { 00:24:47.466 "type": "rebuild", 00:24:47.466 "target": "spare", 00:24:47.466 "progress": { 00:24:47.466 "blocks": 49152, 00:24:47.466 "percent": 75 00:24:47.466 } 00:24:47.466 }, 00:24:47.466 "base_bdevs_list": [ 00:24:47.466 { 00:24:47.466 "name": "spare", 00:24:47.466 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:47.466 "is_configured": true, 00:24:47.466 "data_offset": 0, 00:24:47.466 "data_size": 65536 00:24:47.466 }, 00:24:47.466 { 00:24:47.466 "name": "BaseBdev2", 00:24:47.466 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:47.466 "is_configured": true, 00:24:47.466 "data_offset": 0, 00:24:47.466 "data_size": 65536 00:24:47.466 } 00:24:47.466 ] 00:24:47.466 }' 00:24:47.466 07:45:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:47.725 07:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:47.725 07:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:47.725 07:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.725 07:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:47.725 [2024-10-07 07:45:47.169037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:47.725 [2024-10-07 07:45:47.169354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:48.716 118.17 IOPS, 354.50 MiB/s [2024-10-07 07:45:47.948747] 
bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:48.716 [2024-10-07 07:45:48.048762] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:48.716 [2024-10-07 07:45:48.050921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.716 "name": "raid_bdev1", 00:24:48.716 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:48.716 "strip_size_kb": 0, 00:24:48.716 "state": "online", 00:24:48.716 "raid_level": "raid1", 00:24:48.716 "superblock": false, 00:24:48.716 "num_base_bdevs": 2, 00:24:48.716 "num_base_bdevs_discovered": 2, 00:24:48.716 
"num_base_bdevs_operational": 2, 00:24:48.716 "base_bdevs_list": [ 00:24:48.716 { 00:24:48.716 "name": "spare", 00:24:48.716 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:48.716 "is_configured": true, 00:24:48.716 "data_offset": 0, 00:24:48.716 "data_size": 65536 00:24:48.716 }, 00:24:48.716 { 00:24:48.716 "name": "BaseBdev2", 00:24:48.716 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:48.716 "is_configured": true, 00:24:48.716 "data_offset": 0, 00:24:48.716 "data_size": 65536 00:24:48.716 } 00:24:48.716 ] 00:24:48.716 }' 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 
00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:48.716 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.716 "name": "raid_bdev1", 00:24:48.716 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:48.716 "strip_size_kb": 0, 00:24:48.716 "state": "online", 00:24:48.716 "raid_level": "raid1", 00:24:48.716 "superblock": false, 00:24:48.716 "num_base_bdevs": 2, 00:24:48.716 "num_base_bdevs_discovered": 2, 00:24:48.716 "num_base_bdevs_operational": 2, 00:24:48.716 "base_bdevs_list": [ 00:24:48.716 { 00:24:48.716 "name": "spare", 00:24:48.716 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:48.716 "is_configured": true, 00:24:48.716 "data_offset": 0, 00:24:48.716 "data_size": 65536 00:24:48.716 }, 00:24:48.716 { 00:24:48.716 "name": "BaseBdev2", 00:24:48.716 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:48.716 "is_configured": true, 00:24:48.716 "data_offset": 0, 00:24:48.716 "data_size": 65536 00:24:48.716 } 00:24:48.716 ] 00:24:48.716 }' 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.979 106.86 IOPS, 320.57 MiB/s 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.979 "name": "raid_bdev1", 00:24:48.979 "uuid": "b8a57a74-3ab2-40ac-83a3-574a99a3dbcb", 00:24:48.979 "strip_size_kb": 0, 00:24:48.979 "state": "online", 00:24:48.979 "raid_level": "raid1", 00:24:48.979 "superblock": false, 00:24:48.979 "num_base_bdevs": 2, 00:24:48.979 "num_base_bdevs_discovered": 2, 00:24:48.979 "num_base_bdevs_operational": 2, 00:24:48.979 "base_bdevs_list": [ 00:24:48.979 { 00:24:48.979 "name": "spare", 00:24:48.979 "uuid": "55414daa-853a-53c2-91f3-96aa633c957d", 00:24:48.979 "is_configured": true, 00:24:48.979 "data_offset": 0, 00:24:48.979 "data_size": 65536 00:24:48.979 
}, 00:24:48.979 { 00:24:48.979 "name": "BaseBdev2", 00:24:48.979 "uuid": "d294999c-f5bb-5d27-bfa0-84b68a30a7a2", 00:24:48.979 "is_configured": true, 00:24:48.979 "data_offset": 0, 00:24:48.979 "data_size": 65536 00:24:48.979 } 00:24:48.979 ] 00:24:48.979 }' 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.979 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.549 [2024-10-07 07:45:48.809916] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:49.549 [2024-10-07 07:45:48.809956] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:49.549 00:24:49.549 Latency(us) 00:24:49.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.549 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:49.549 raid_bdev1 : 7.58 101.73 305.20 0.00 0.00 13445.71 317.93 131820.98 00:24:49.549 =================================================================================================================== 00:24:49.549 Total : 101.73 305.20 0.00 0.00 13445.71 317.93 131820.98 00:24:49.549 [2024-10-07 07:45:48.868832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.549 [2024-10-07 07:45:48.869019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:49.549 [2024-10-07 07:45:48.869144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:49.549 [2024-10-07 07:45:48.869262] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:49.549 { 00:24:49.549 "results": [ 00:24:49.549 { 00:24:49.549 "job": "raid_bdev1", 00:24:49.549 "core_mask": "0x1", 00:24:49.549 "workload": "randrw", 00:24:49.549 "percentage": 50, 00:24:49.549 "status": "finished", 00:24:49.549 "queue_depth": 2, 00:24:49.549 "io_size": 3145728, 00:24:49.549 "runtime": 7.578651, 00:24:49.549 "iops": 101.73314485651866, 00:24:49.549 "mibps": 305.199434569556, 00:24:49.549 "io_failed": 0, 00:24:49.549 "io_timeout": 0, 00:24:49.549 "avg_latency_us": 13445.706281267372, 00:24:49.549 "min_latency_us": 317.92761904761903, 00:24:49.549 "max_latency_us": 131820.98285714287 00:24:49.549 } 00:24:49.549 ], 00:24:49.549 "core_count": 1 00:24:49.549 } 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.549 07:45:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:24:49.808 /dev/nbd0 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local i 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # break 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.808 1+0 records in 00:24:49.808 1+0 records out 00:24:49.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301711 s, 13.6 MB/s 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # size=4096 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # return 0 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:49.808 
07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.808 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:50.068 /dev/nbd1 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local i 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # break 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.068 1+0 records in 00:24:50.068 1+0 records out 00:24:50.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618313 s, 6.6 MB/s 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # size=4096 
00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # return 0 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:50.068 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:50.327 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:50.327 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:50.327 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:50.327 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.327 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:50.327 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.327 07:45:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.586 
07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.586 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:50.845 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.845 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.845 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.845 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.845 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.845 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76678 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' -z 76678 ']' 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # kill -0 76678 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # uname 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 76678 00:24:51.105 killing process with pid 76678 00:24:51.105 Received shutdown signal, test time was about 9.182057 seconds 00:24:51.105 00:24:51.105 Latency(us) 00:24:51.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.105 =================================================================================================================== 00:24:51.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # echo 'killing process with pid 76678' 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # kill 76678 00:24:51.105 [2024-10-07 07:45:50.449221] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:51.105 07:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@977 -- # wait 76678 00:24:51.364 [2024-10-07 07:45:50.689933] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:52.742 ************************************ 00:24:52.742 END TEST raid_rebuild_test_io 00:24:52.742 ************************************ 
00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:24:52.742 00:24:52.742 real 0m12.827s 00:24:52.742 user 0m16.227s 00:24:52.742 sys 0m1.761s 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # xtrace_disable 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.742 07:45:52 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:24:52.742 07:45:52 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:24:52.742 07:45:52 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:24:52.742 07:45:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:52.742 ************************************ 00:24:52.742 START TEST raid_rebuild_test_sb_io 00:24:52.742 ************************************ 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 2 true true true 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # 
(( i++ )) 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77066 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77066 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M 
-q 2 -U -z -L bdev_raid 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # '[' -z 77066 ']' 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local max_retries=100 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@843 -- # xtrace_disable 00:24:52.742 07:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.742 [2024-10-07 07:45:52.293533] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:24:52.742 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:52.742 Zero copy mechanism will not be used. 
00:24:52.742 [2024-10-07 07:45:52.293922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77066 ] 00:24:53.002 [2024-10-07 07:45:52.476773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.261 [2024-10-07 07:45:52.700779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.520 [2024-10-07 07:45:52.928226] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.520 [2024-10-07 07:45:52.928261] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # return 0 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.780 BaseBdev1_malloc 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.780 [2024-10-07 07:45:53.209195] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:53.780 [2024-10-07 07:45:53.209419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.780 [2024-10-07 07:45:53.209486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:53.780 [2024-10-07 07:45:53.209588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.780 [2024-10-07 07:45:53.212177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.780 BaseBdev1 00:24:53.780 [2024-10-07 07:45:53.212345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.780 BaseBdev2_malloc 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.780 [2024-10-07 07:45:53.276681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:53.780 [2024-10-07 07:45:53.276915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:24:53.780 [2024-10-07 07:45:53.276980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:53.780 [2024-10-07 07:45:53.277084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.780 [2024-10-07 07:45:53.279684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.780 [2024-10-07 07:45:53.279739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:53.780 BaseBdev2 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.780 spare_malloc 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.780 spare_delay 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:53.780 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.780 
[2024-10-07 07:45:53.337247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:53.780 [2024-10-07 07:45:53.337427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.780 [2024-10-07 07:45:53.337458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:53.780 [2024-10-07 07:45:53.337474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.039 [2024-10-07 07:45:53.339971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.039 [2024-10-07 07:45:53.340013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:54.039 spare 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.039 [2024-10-07 07:45:53.345314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.039 [2024-10-07 07:45:53.347709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:54.039 [2024-10-07 07:45:53.348018] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:54.039 [2024-10-07 07:45:53.348075] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:54.039 [2024-10-07 07:45:53.348463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:54.039 [2024-10-07 07:45:53.348753] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:54.039 [2024-10-07 
07:45:53.348862] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:54.039 [2024-10-07 07:45:53.349141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.039 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:54.040 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.040 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.040 07:45:53 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.040 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.040 "name": "raid_bdev1", 00:24:54.040 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:54.040 "strip_size_kb": 0, 00:24:54.040 "state": "online", 00:24:54.040 "raid_level": "raid1", 00:24:54.040 "superblock": true, 00:24:54.040 "num_base_bdevs": 2, 00:24:54.040 "num_base_bdevs_discovered": 2, 00:24:54.040 "num_base_bdevs_operational": 2, 00:24:54.040 "base_bdevs_list": [ 00:24:54.040 { 00:24:54.040 "name": "BaseBdev1", 00:24:54.040 "uuid": "04f3a0dd-cdb2-58b8-8297-e3782032af5f", 00:24:54.040 "is_configured": true, 00:24:54.040 "data_offset": 2048, 00:24:54.040 "data_size": 63488 00:24:54.040 }, 00:24:54.040 { 00:24:54.040 "name": "BaseBdev2", 00:24:54.040 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:54.040 "is_configured": true, 00:24:54.040 "data_offset": 2048, 00:24:54.040 "data_size": 63488 00:24:54.040 } 00:24:54.040 ] 00:24:54.040 }' 00:24:54.040 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.040 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.299 [2024-10-07 07:45:53.781832] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:54.299 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.558 [2024-10-07 07:45:53.861490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:54.558 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.558 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:54.558 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.559 "name": "raid_bdev1", 00:24:54.559 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:54.559 "strip_size_kb": 0, 00:24:54.559 "state": "online", 00:24:54.559 "raid_level": "raid1", 00:24:54.559 "superblock": true, 00:24:54.559 "num_base_bdevs": 2, 00:24:54.559 "num_base_bdevs_discovered": 1, 00:24:54.559 "num_base_bdevs_operational": 1, 00:24:54.559 "base_bdevs_list": [ 00:24:54.559 { 00:24:54.559 "name": null, 00:24:54.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.559 "is_configured": false, 00:24:54.559 "data_offset": 0, 00:24:54.559 "data_size": 63488 00:24:54.559 }, 00:24:54.559 { 00:24:54.559 "name": "BaseBdev2", 00:24:54.559 "uuid": 
"28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:54.559 "is_configured": true, 00:24:54.559 "data_offset": 2048, 00:24:54.559 "data_size": 63488 00:24:54.559 } 00:24:54.559 ] 00:24:54.559 }' 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.559 07:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.559 [2024-10-07 07:45:53.989949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:54.559 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:54.559 Zero copy mechanism will not be used. 00:24:54.559 Running I/O for 60 seconds... 00:24:54.858 07:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:54.858 07:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:54.858 07:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.858 [2024-10-07 07:45:54.323058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:54.858 07:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:54.858 07:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:54.858 [2024-10-07 07:45:54.372815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:54.858 [2024-10-07 07:45:54.375176] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:55.116 [2024-10-07 07:45:54.484300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:55.116 [2024-10-07 07:45:54.484937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:55.375 [2024-10-07 07:45:54.701354] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:55.375 [2024-10-07 07:45:54.701716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:55.633 154.00 IOPS, 462.00 MiB/s [2024-10-07 07:45:55.074572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:55.633 [2024-10-07 07:45:55.074958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:55.892 "name": "raid_bdev1", 00:24:55.892 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:55.892 "strip_size_kb": 0, 
00:24:55.892 "state": "online", 00:24:55.892 "raid_level": "raid1", 00:24:55.892 "superblock": true, 00:24:55.892 "num_base_bdevs": 2, 00:24:55.892 "num_base_bdevs_discovered": 2, 00:24:55.892 "num_base_bdevs_operational": 2, 00:24:55.892 "process": { 00:24:55.892 "type": "rebuild", 00:24:55.892 "target": "spare", 00:24:55.892 "progress": { 00:24:55.892 "blocks": 14336, 00:24:55.892 "percent": 22 00:24:55.892 } 00:24:55.892 }, 00:24:55.892 "base_bdevs_list": [ 00:24:55.892 { 00:24:55.892 "name": "spare", 00:24:55.892 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:24:55.892 "is_configured": true, 00:24:55.892 "data_offset": 2048, 00:24:55.892 "data_size": 63488 00:24:55.892 }, 00:24:55.892 { 00:24:55.892 "name": "BaseBdev2", 00:24:55.892 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:55.892 "is_configured": true, 00:24:55.892 "data_offset": 2048, 00:24:55.892 "data_size": 63488 00:24:55.892 } 00:24:55.892 ] 00:24:55.892 }' 00:24:55.892 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:56.151 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.151 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:56.151 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.151 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:56.151 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:56.151 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.151 [2024-10-07 07:45:55.522699] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.409 [2024-10-07 07:45:55.740667] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:24:56.410 [2024-10-07 07:45:55.750466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.410 [2024-10-07 07:45:55.750513] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.410 [2024-10-07 07:45:55.750533] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:56.410 [2024-10-07 07:45:55.787622] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:56.410 
07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.410 "name": "raid_bdev1", 00:24:56.410 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:56.410 "strip_size_kb": 0, 00:24:56.410 "state": "online", 00:24:56.410 "raid_level": "raid1", 00:24:56.410 "superblock": true, 00:24:56.410 "num_base_bdevs": 2, 00:24:56.410 "num_base_bdevs_discovered": 1, 00:24:56.410 "num_base_bdevs_operational": 1, 00:24:56.410 "base_bdevs_list": [ 00:24:56.410 { 00:24:56.410 "name": null, 00:24:56.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.410 "is_configured": false, 00:24:56.410 "data_offset": 0, 00:24:56.410 "data_size": 63488 00:24:56.410 }, 00:24:56.410 { 00:24:56.410 "name": "BaseBdev2", 00:24:56.410 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:56.410 "is_configured": true, 00:24:56.410 "data_offset": 2048, 00:24:56.410 "data_size": 63488 00:24:56.410 } 00:24:56.410 ] 00:24:56.410 }' 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.410 07:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.929 135.50 IOPS, 406.50 MiB/s 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:56.929 
07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:56.929 "name": "raid_bdev1", 00:24:56.929 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:56.929 "strip_size_kb": 0, 00:24:56.929 "state": "online", 00:24:56.929 "raid_level": "raid1", 00:24:56.929 "superblock": true, 00:24:56.929 "num_base_bdevs": 2, 00:24:56.929 "num_base_bdevs_discovered": 1, 00:24:56.929 "num_base_bdevs_operational": 1, 00:24:56.929 "base_bdevs_list": [ 00:24:56.929 { 00:24:56.929 "name": null, 00:24:56.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.929 "is_configured": false, 00:24:56.929 "data_offset": 0, 00:24:56.929 "data_size": 63488 00:24:56.929 }, 00:24:56.929 { 00:24:56.929 "name": "BaseBdev2", 00:24:56.929 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:56.929 "is_configured": true, 00:24:56.929 "data_offset": 2048, 00:24:56.929 "data_size": 63488 00:24:56.929 } 00:24:56.929 ] 00:24:56.929 }' 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:56.929 07:45:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.929 [2024-10-07 07:45:56.358579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:56.929 07:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:56.929 [2024-10-07 07:45:56.414586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:56.929 [2024-10-07 07:45:56.417054] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:57.189 [2024-10-07 07:45:56.525367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:57.189 [2024-10-07 07:45:56.526004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:57.189 [2024-10-07 07:45:56.742353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:57.189 [2024-10-07 07:45:56.742732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:57.756 145.67 IOPS, 437.00 MiB/s [2024-10-07 07:45:57.056953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:57.756 [2024-10-07 07:45:57.057491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 
00:24:57.756 [2024-10-07 07:45:57.267468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:57.756 [2024-10-07 07:45:57.268105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.015 "name": "raid_bdev1", 00:24:58.015 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:58.015 "strip_size_kb": 0, 00:24:58.015 "state": "online", 00:24:58.015 "raid_level": "raid1", 00:24:58.015 "superblock": true, 00:24:58.015 "num_base_bdevs": 2, 00:24:58.015 "num_base_bdevs_discovered": 2, 00:24:58.015 "num_base_bdevs_operational": 2, 00:24:58.015 "process": { 00:24:58.015 "type": "rebuild", 00:24:58.015 
"target": "spare", 00:24:58.015 "progress": { 00:24:58.015 "blocks": 12288, 00:24:58.015 "percent": 19 00:24:58.015 } 00:24:58.015 }, 00:24:58.015 "base_bdevs_list": [ 00:24:58.015 { 00:24:58.015 "name": "spare", 00:24:58.015 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:24:58.015 "is_configured": true, 00:24:58.015 "data_offset": 2048, 00:24:58.015 "data_size": 63488 00:24:58.015 }, 00:24:58.015 { 00:24:58.015 "name": "BaseBdev2", 00:24:58.015 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:58.015 "is_configured": true, 00:24:58.015 "data_offset": 2048, 00:24:58.015 "data_size": 63488 00:24:58.015 } 00:24:58.015 ] 00:24:58.015 }' 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.015 [2024-10-07 07:45:57.518413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:58.015 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@706 -- # local timeout=447 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:58.015 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.274 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:58.274 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.274 "name": "raid_bdev1", 00:24:58.274 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:58.274 "strip_size_kb": 0, 00:24:58.274 "state": "online", 00:24:58.274 "raid_level": "raid1", 00:24:58.274 "superblock": true, 00:24:58.274 "num_base_bdevs": 2, 00:24:58.274 "num_base_bdevs_discovered": 2, 00:24:58.274 "num_base_bdevs_operational": 2, 00:24:58.274 "process": { 00:24:58.274 "type": "rebuild", 00:24:58.274 "target": "spare", 00:24:58.274 "progress": { 00:24:58.274 "blocks": 14336, 00:24:58.274 "percent": 22 00:24:58.274 } 00:24:58.274 }, 00:24:58.274 "base_bdevs_list": [ 
00:24:58.274 { 00:24:58.274 "name": "spare", 00:24:58.274 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:24:58.274 "is_configured": true, 00:24:58.274 "data_offset": 2048, 00:24:58.274 "data_size": 63488 00:24:58.274 }, 00:24:58.274 { 00:24:58.274 "name": "BaseBdev2", 00:24:58.274 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:58.274 "is_configured": true, 00:24:58.274 "data_offset": 2048, 00:24:58.274 "data_size": 63488 00:24:58.274 } 00:24:58.274 ] 00:24:58.274 }' 00:24:58.274 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.274 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:58.274 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.275 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:58.275 07:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:58.533 [2024-10-07 07:45:57.930047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:58.792 132.00 IOPS, 396.00 MiB/s [2024-10-07 07:45:58.149141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:59.050 [2024-10-07 07:45:58.379283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:59.050 [2024-10-07 07:45:58.380131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:59.050 [2024-10-07 07:45:58.589908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:59.309 "name": "raid_bdev1", 00:24:59.309 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:24:59.309 "strip_size_kb": 0, 00:24:59.309 "state": "online", 00:24:59.309 "raid_level": "raid1", 00:24:59.309 "superblock": true, 00:24:59.309 "num_base_bdevs": 2, 00:24:59.309 "num_base_bdevs_discovered": 2, 00:24:59.309 "num_base_bdevs_operational": 2, 00:24:59.309 "process": { 00:24:59.309 "type": "rebuild", 00:24:59.309 "target": "spare", 00:24:59.309 "progress": { 00:24:59.309 "blocks": 28672, 00:24:59.309 "percent": 45 00:24:59.309 } 00:24:59.309 }, 00:24:59.309 "base_bdevs_list": [ 00:24:59.309 { 00:24:59.309 "name": "spare", 00:24:59.309 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:24:59.309 "is_configured": true, 00:24:59.309 
"data_offset": 2048, 00:24:59.309 "data_size": 63488 00:24:59.309 }, 00:24:59.309 { 00:24:59.309 "name": "BaseBdev2", 00:24:59.309 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:24:59.309 "is_configured": true, 00:24:59.309 "data_offset": 2048, 00:24:59.309 "data_size": 63488 00:24:59.309 } 00:24:59.309 ] 00:24:59.309 }' 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:59.309 07:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:59.568 [2024-10-07 07:45:58.901136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:59.568 [2024-10-07 07:45:58.901762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:59.827 116.60 IOPS, 349.80 MiB/s [2024-10-07 07:45:59.329370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:00.396 "name": "raid_bdev1", 00:25:00.396 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:00.396 "strip_size_kb": 0, 00:25:00.396 "state": "online", 00:25:00.396 "raid_level": "raid1", 00:25:00.396 "superblock": true, 00:25:00.396 "num_base_bdevs": 2, 00:25:00.396 "num_base_bdevs_discovered": 2, 00:25:00.396 "num_base_bdevs_operational": 2, 00:25:00.396 "process": { 00:25:00.396 "type": "rebuild", 00:25:00.396 "target": "spare", 00:25:00.396 "progress": { 00:25:00.396 "blocks": 45056, 00:25:00.396 "percent": 70 00:25:00.396 } 00:25:00.396 }, 00:25:00.396 "base_bdevs_list": [ 00:25:00.396 { 00:25:00.396 "name": "spare", 00:25:00.396 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:00.396 "is_configured": true, 00:25:00.396 "data_offset": 2048, 00:25:00.396 "data_size": 63488 00:25:00.396 }, 00:25:00.396 { 00:25:00.396 "name": "BaseBdev2", 00:25:00.396 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:00.396 "is_configured": true, 00:25:00.396 "data_offset": 2048, 00:25:00.396 "data_size": 63488 00:25:00.396 } 00:25:00.396 ] 00:25:00.396 }' 00:25:00.396 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:00.655 07:45:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.655 07:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:00.655 105.17 IOPS, 315.50 MiB/s 07:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.655 07:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:00.655 [2024-10-07 07:46:00.094233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:25:00.914 [2024-10-07 07:46:00.295933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:01.503 [2024-10-07 07:46:00.852582] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:01.503 [2024-10-07 07:46:00.958986] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:01.503 [2024-10-07 07:46:00.962110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.503 94.57 IOPS, 283.71 MiB/s 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.503 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:01.762 "name": "raid_bdev1", 00:25:01.762 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:01.762 "strip_size_kb": 0, 00:25:01.762 "state": "online", 00:25:01.762 "raid_level": "raid1", 00:25:01.762 "superblock": true, 00:25:01.762 "num_base_bdevs": 2, 00:25:01.762 "num_base_bdevs_discovered": 2, 00:25:01.762 "num_base_bdevs_operational": 2, 00:25:01.762 "base_bdevs_list": [ 00:25:01.762 { 00:25:01.762 "name": "spare", 00:25:01.762 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:01.762 "is_configured": true, 00:25:01.762 "data_offset": 2048, 00:25:01.762 "data_size": 63488 00:25:01.762 }, 00:25:01.762 { 00:25:01.762 "name": "BaseBdev2", 00:25:01.762 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:01.762 "is_configured": true, 00:25:01.762 "data_offset": 2048, 00:25:01.762 "data_size": 63488 00:25:01.762 } 00:25:01.762 ] 00:25:01.762 }' 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:25:01.762 07:46:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:01.762 "name": "raid_bdev1", 00:25:01.762 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:01.762 "strip_size_kb": 0, 00:25:01.762 "state": "online", 00:25:01.762 "raid_level": "raid1", 00:25:01.762 "superblock": true, 00:25:01.762 "num_base_bdevs": 2, 00:25:01.762 "num_base_bdevs_discovered": 2, 00:25:01.762 "num_base_bdevs_operational": 2, 00:25:01.762 "base_bdevs_list": [ 00:25:01.762 { 00:25:01.762 "name": "spare", 00:25:01.762 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:01.762 "is_configured": true, 00:25:01.762 "data_offset": 2048, 00:25:01.762 "data_size": 63488 00:25:01.762 }, 00:25:01.762 { 00:25:01.762 "name": "BaseBdev2", 00:25:01.762 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:01.762 "is_configured": true, 00:25:01.762 
"data_offset": 2048, 00:25:01.762 "data_size": 63488 00:25:01.762 } 00:25:01.762 ] 00:25:01.762 }' 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.762 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.022 "name": "raid_bdev1", 00:25:02.022 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:02.022 "strip_size_kb": 0, 00:25:02.022 "state": "online", 00:25:02.022 "raid_level": "raid1", 00:25:02.022 "superblock": true, 00:25:02.022 "num_base_bdevs": 2, 00:25:02.022 "num_base_bdevs_discovered": 2, 00:25:02.022 "num_base_bdevs_operational": 2, 00:25:02.022 "base_bdevs_list": [ 00:25:02.022 { 00:25:02.022 "name": "spare", 00:25:02.022 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:02.022 "is_configured": true, 00:25:02.022 "data_offset": 2048, 00:25:02.022 "data_size": 63488 00:25:02.022 }, 00:25:02.022 { 00:25:02.022 "name": "BaseBdev2", 00:25:02.022 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:02.022 "is_configured": true, 00:25:02.022 "data_offset": 2048, 00:25:02.022 "data_size": 63488 00:25:02.022 } 00:25:02.022 ] 00:25:02.022 }' 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.022 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.281 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:02.281 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:02.281 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.281 [2024-10-07 07:46:01.760048] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:02.281 [2024-10-07 07:46:01.760085] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:25:02.541 00:25:02.541 Latency(us) 00:25:02.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.541 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:02.541 raid_bdev1 : 7.85 89.41 268.22 0.00 0.00 14945.01 298.42 115343.36 00:25:02.541 =================================================================================================================== 00:25:02.541 Total : 89.41 268.22 0.00 0.00 14945.01 298.42 115343.36 00:25:02.541 [2024-10-07 07:46:01.866448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:02.541 { 00:25:02.541 "results": [ 00:25:02.541 { 00:25:02.541 "job": "raid_bdev1", 00:25:02.541 "core_mask": "0x1", 00:25:02.541 "workload": "randrw", 00:25:02.541 "percentage": 50, 00:25:02.541 "status": "finished", 00:25:02.541 "queue_depth": 2, 00:25:02.541 "io_size": 3145728, 00:25:02.541 "runtime": 7.851753, 00:25:02.541 "iops": 89.40678597505551, 00:25:02.541 "mibps": 268.2203579251665, 00:25:02.541 "io_failed": 0, 00:25:02.541 "io_timeout": 0, 00:25:02.541 "avg_latency_us": 14945.008405915072, 00:25:02.541 "min_latency_us": 298.42285714285714, 00:25:02.541 "max_latency_us": 115343.36 00:25:02.541 } 00:25:02.541 ], 00:25:02.541 "core_count": 1 00:25:02.541 } 00:25:02.541 [2024-10-07 07:46:01.866614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:02.541 [2024-10-07 07:46:01.866703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:02.541 [2024-10-07 07:46:01.866743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.541 07:46:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:02.541 07:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:25:02.801 /dev/nbd0 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:02.801 07:46:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local i 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # break 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:02.801 1+0 records in 00:25:02.801 1+0 records out 00:25:02.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268372 s, 15.3 MB/s 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # size=4096 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # return 0 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:02.801 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:25:03.060 /dev/nbd1 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local i 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@874 -- # (( i = 1 )) 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # break 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:03.060 1+0 records in 00:25:03.060 1+0 records out 00:25:03.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440066 s, 9.3 MB/s 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # size=4096 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # return 0 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:03.060 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:03.319 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:25:03.319 
07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:03.319 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:03.319 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:03.319 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:03.319 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.319 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:03.579 
07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.579 07:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:03.838 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:03.838 
07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:03.838 [2024-10-07 07:46:03.230583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:03.838 [2024-10-07 07:46:03.230789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.838 [2024-10-07 07:46:03.230854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:03.838 [2024-10-07 07:46:03.231025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.838 [2024-10-07 07:46:03.233836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.838 [2024-10-07 07:46:03.233879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:03.838 [2024-10-07 07:46:03.233977] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:03.839 [2024-10-07 07:46:03.234029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:03.839 [2024-10-07 07:46:03.234181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:03.839 spare 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:03.839 [2024-10-07 07:46:03.334292] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:03.839 [2024-10-07 07:46:03.334509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:03.839 [2024-10-07 07:46:03.334973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002b0d0 00:25:03.839 [2024-10-07 07:46:03.335301] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:03.839 [2024-10-07 07:46:03.335423] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:03.839 [2024-10-07 07:46:03.335664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.839 "name": "raid_bdev1", 00:25:03.839 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:03.839 "strip_size_kb": 0, 00:25:03.839 "state": "online", 00:25:03.839 "raid_level": "raid1", 00:25:03.839 "superblock": true, 00:25:03.839 "num_base_bdevs": 2, 00:25:03.839 "num_base_bdevs_discovered": 2, 00:25:03.839 "num_base_bdevs_operational": 2, 00:25:03.839 "base_bdevs_list": [ 00:25:03.839 { 00:25:03.839 "name": "spare", 00:25:03.839 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:03.839 "is_configured": true, 00:25:03.839 "data_offset": 2048, 00:25:03.839 "data_size": 63488 00:25:03.839 }, 00:25:03.839 { 00:25:03.839 "name": "BaseBdev2", 00:25:03.839 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:03.839 "is_configured": true, 00:25:03.839 "data_offset": 2048, 00:25:03.839 "data_size": 63488 00:25:03.839 } 00:25:03.839 ] 00:25:03.839 }' 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.839 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:04.516 
07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:04.516 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:04.516 "name": "raid_bdev1", 00:25:04.516 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:04.516 "strip_size_kb": 0, 00:25:04.516 "state": "online", 00:25:04.516 "raid_level": "raid1", 00:25:04.516 "superblock": true, 00:25:04.516 "num_base_bdevs": 2, 00:25:04.516 "num_base_bdevs_discovered": 2, 00:25:04.516 "num_base_bdevs_operational": 2, 00:25:04.516 "base_bdevs_list": [ 00:25:04.516 { 00:25:04.516 "name": "spare", 00:25:04.516 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:04.516 "is_configured": true, 00:25:04.516 "data_offset": 2048, 00:25:04.516 "data_size": 63488 00:25:04.516 }, 00:25:04.516 { 00:25:04.516 "name": "BaseBdev2", 00:25:04.516 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:04.516 "is_configured": true, 00:25:04.516 "data_offset": 2048, 00:25:04.516 "data_size": 63488 00:25:04.516 } 00:25:04.516 ] 00:25:04.516 }' 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:04.517 
07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.517 [2024-10-07 07:46:03.983870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:04.517 07:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.517 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:04.517 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.517 "name": "raid_bdev1", 00:25:04.517 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:04.517 "strip_size_kb": 0, 00:25:04.517 "state": "online", 00:25:04.517 "raid_level": "raid1", 00:25:04.517 "superblock": true, 00:25:04.517 "num_base_bdevs": 2, 00:25:04.517 "num_base_bdevs_discovered": 1, 00:25:04.517 "num_base_bdevs_operational": 1, 00:25:04.517 "base_bdevs_list": [ 00:25:04.517 { 00:25:04.517 "name": null, 00:25:04.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.517 "is_configured": false, 00:25:04.517 "data_offset": 0, 00:25:04.517 "data_size": 63488 00:25:04.517 }, 00:25:04.517 { 00:25:04.517 "name": "BaseBdev2", 00:25:04.517 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:04.517 "is_configured": true, 00:25:04.517 "data_offset": 2048, 00:25:04.517 "data_size": 63488 00:25:04.517 } 00:25:04.517 ] 00:25:04.517 }' 00:25:04.517 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.517 07:46:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.084 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:05.084 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:05.084 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.084 [2024-10-07 07:46:04.460097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:05.084 [2024-10-07 07:46:04.460455] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:05.084 [2024-10-07 07:46:04.460482] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:05.084 [2024-10-07 07:46:04.460547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:05.084 [2024-10-07 07:46:04.478462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:25:05.084 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:05.084 07:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:05.084 [2024-10-07 07:46:04.480846] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:06.020 "name": "raid_bdev1", 00:25:06.020 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:06.020 "strip_size_kb": 0, 00:25:06.020 "state": "online", 00:25:06.020 "raid_level": "raid1", 00:25:06.020 "superblock": true, 00:25:06.020 "num_base_bdevs": 2, 00:25:06.020 "num_base_bdevs_discovered": 2, 00:25:06.020 "num_base_bdevs_operational": 2, 00:25:06.020 "process": { 00:25:06.020 "type": "rebuild", 00:25:06.020 "target": "spare", 00:25:06.020 "progress": { 00:25:06.020 "blocks": 20480, 00:25:06.020 "percent": 32 00:25:06.020 } 00:25:06.020 }, 00:25:06.020 "base_bdevs_list": [ 00:25:06.020 { 00:25:06.020 "name": "spare", 00:25:06.020 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:06.020 "is_configured": true, 00:25:06.020 "data_offset": 2048, 00:25:06.020 "data_size": 63488 00:25:06.020 }, 00:25:06.020 { 00:25:06.020 "name": "BaseBdev2", 00:25:06.020 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:06.020 "is_configured": true, 00:25:06.020 "data_offset": 2048, 00:25:06.020 "data_size": 63488 00:25:06.020 } 00:25:06.020 ] 00:25:06.020 }' 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:25:06.020 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.280 [2024-10-07 07:46:05.626334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:06.280 [2024-10-07 07:46:05.688633] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:06.280 [2024-10-07 07:46:05.688697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.280 [2024-10-07 07:46:05.688738] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:06.280 [2024-10-07 07:46:05.688747] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:06.280 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.280 "name": "raid_bdev1", 00:25:06.280 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:06.280 "strip_size_kb": 0, 00:25:06.280 "state": "online", 00:25:06.280 "raid_level": "raid1", 00:25:06.280 "superblock": true, 00:25:06.280 "num_base_bdevs": 2, 00:25:06.280 "num_base_bdevs_discovered": 1, 00:25:06.280 "num_base_bdevs_operational": 1, 00:25:06.280 "base_bdevs_list": [ 00:25:06.280 { 00:25:06.280 "name": null, 00:25:06.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.280 "is_configured": false, 00:25:06.281 "data_offset": 0, 00:25:06.281 "data_size": 63488 00:25:06.281 }, 00:25:06.281 { 00:25:06.281 "name": "BaseBdev2", 00:25:06.281 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:06.281 "is_configured": true, 00:25:06.281 "data_offset": 2048, 00:25:06.281 "data_size": 63488 00:25:06.281 } 00:25:06.281 ] 00:25:06.281 }' 
00:25:06.281 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.281 07:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.849 07:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:06.849 07:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:06.849 07:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.849 [2024-10-07 07:46:06.147461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:06.849 [2024-10-07 07:46:06.147536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.849 [2024-10-07 07:46:06.147565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:06.849 [2024-10-07 07:46:06.147579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.849 [2024-10-07 07:46:06.148161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.849 [2024-10-07 07:46:06.148194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:06.849 [2024-10-07 07:46:06.148303] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:06.849 [2024-10-07 07:46:06.148318] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:06.849 [2024-10-07 07:46:06.148334] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:06.849 [2024-10-07 07:46:06.148366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:06.849 spare 00:25:06.849 [2024-10-07 07:46:06.165444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:25:06.849 07:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:06.849 07:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:06.849 [2024-10-07 07:46:06.167833] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:07.786 "name": "raid_bdev1", 00:25:07.786 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:07.786 "strip_size_kb": 0, 00:25:07.786 
"state": "online", 00:25:07.786 "raid_level": "raid1", 00:25:07.786 "superblock": true, 00:25:07.786 "num_base_bdevs": 2, 00:25:07.786 "num_base_bdevs_discovered": 2, 00:25:07.786 "num_base_bdevs_operational": 2, 00:25:07.786 "process": { 00:25:07.786 "type": "rebuild", 00:25:07.786 "target": "spare", 00:25:07.786 "progress": { 00:25:07.786 "blocks": 20480, 00:25:07.786 "percent": 32 00:25:07.786 } 00:25:07.786 }, 00:25:07.786 "base_bdevs_list": [ 00:25:07.786 { 00:25:07.786 "name": "spare", 00:25:07.786 "uuid": "5f47c73b-7cfe-5da2-9ce6-8b69da7b6ef8", 00:25:07.786 "is_configured": true, 00:25:07.786 "data_offset": 2048, 00:25:07.786 "data_size": 63488 00:25:07.786 }, 00:25:07.786 { 00:25:07.786 "name": "BaseBdev2", 00:25:07.786 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:07.786 "is_configured": true, 00:25:07.786 "data_offset": 2048, 00:25:07.786 "data_size": 63488 00:25:07.786 } 00:25:07.786 ] 00:25:07.786 }' 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:07.786 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:07.787 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:07.787 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:07.787 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:07.787 [2024-10-07 07:46:07.317181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:08.046 [2024-10-07 07:46:07.375920] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:25:08.046 [2024-10-07 07:46:07.376167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.046 [2024-10-07 07:46:07.376280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:08.046 [2024-10-07 07:46:07.376309] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:08.046 07:46:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.046 "name": "raid_bdev1", 00:25:08.046 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:08.046 "strip_size_kb": 0, 00:25:08.046 "state": "online", 00:25:08.046 "raid_level": "raid1", 00:25:08.046 "superblock": true, 00:25:08.046 "num_base_bdevs": 2, 00:25:08.046 "num_base_bdevs_discovered": 1, 00:25:08.046 "num_base_bdevs_operational": 1, 00:25:08.046 "base_bdevs_list": [ 00:25:08.046 { 00:25:08.046 "name": null, 00:25:08.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.046 "is_configured": false, 00:25:08.046 "data_offset": 0, 00:25:08.046 "data_size": 63488 00:25:08.046 }, 00:25:08.046 { 00:25:08.046 "name": "BaseBdev2", 00:25:08.046 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:08.046 "is_configured": true, 00:25:08.046 "data_offset": 2048, 00:25:08.046 "data_size": 63488 00:25:08.046 } 00:25:08.046 ] 00:25:08.046 }' 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.046 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:08.619 "name": "raid_bdev1", 00:25:08.619 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:08.619 "strip_size_kb": 0, 00:25:08.619 "state": "online", 00:25:08.619 "raid_level": "raid1", 00:25:08.619 "superblock": true, 00:25:08.619 "num_base_bdevs": 2, 00:25:08.619 "num_base_bdevs_discovered": 1, 00:25:08.619 "num_base_bdevs_operational": 1, 00:25:08.619 "base_bdevs_list": [ 00:25:08.619 { 00:25:08.619 "name": null, 00:25:08.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.619 "is_configured": false, 00:25:08.619 "data_offset": 0, 00:25:08.619 "data_size": 63488 00:25:08.619 }, 00:25:08.619 { 00:25:08.619 "name": "BaseBdev2", 00:25:08.619 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:08.619 "is_configured": true, 00:25:08.619 "data_offset": 2048, 00:25:08.619 "data_size": 63488 00:25:08.619 } 00:25:08.619 ] 00:25:08.619 }' 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:08.619 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:08.620 07:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.620 [2024-10-07 07:46:08.021829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:08.620 [2024-10-07 07:46:08.021892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.620 [2024-10-07 07:46:08.021917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:08.620 [2024-10-07 07:46:08.021934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.620 [2024-10-07 07:46:08.022417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.620 [2024-10-07 07:46:08.022457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:08.620 [2024-10-07 07:46:08.022546] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:08.620 [2024-10-07 07:46:08.022567] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:08.620 [2024-10-07 07:46:08.022578] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:08.620 [2024-10-07 07:46:08.022600] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:08.620 BaseBdev1 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:08.620 07:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.560 "name": "raid_bdev1", 00:25:09.560 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:09.560 "strip_size_kb": 0, 00:25:09.560 "state": "online", 00:25:09.560 "raid_level": "raid1", 00:25:09.560 "superblock": true, 00:25:09.560 "num_base_bdevs": 2, 00:25:09.560 "num_base_bdevs_discovered": 1, 00:25:09.560 "num_base_bdevs_operational": 1, 00:25:09.560 "base_bdevs_list": [ 00:25:09.560 { 00:25:09.560 "name": null, 00:25:09.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.560 "is_configured": false, 00:25:09.560 "data_offset": 0, 00:25:09.560 "data_size": 63488 00:25:09.560 }, 00:25:09.560 { 00:25:09.560 "name": "BaseBdev2", 00:25:09.560 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:09.560 "is_configured": true, 00:25:09.560 "data_offset": 2048, 00:25:09.560 "data_size": 63488 00:25:09.560 } 00:25:09.560 ] 00:25:09.560 }' 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.560 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:10.129 "name": "raid_bdev1", 00:25:10.129 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:10.129 "strip_size_kb": 0, 00:25:10.129 "state": "online", 00:25:10.129 "raid_level": "raid1", 00:25:10.129 "superblock": true, 00:25:10.129 "num_base_bdevs": 2, 00:25:10.129 "num_base_bdevs_discovered": 1, 00:25:10.129 "num_base_bdevs_operational": 1, 00:25:10.129 "base_bdevs_list": [ 00:25:10.129 { 00:25:10.129 "name": null, 00:25:10.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.129 "is_configured": false, 00:25:10.129 "data_offset": 0, 00:25:10.129 "data_size": 63488 00:25:10.129 }, 00:25:10.129 { 00:25:10.129 "name": "BaseBdev2", 00:25:10.129 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:10.129 "is_configured": true, 00:25:10.129 "data_offset": 2048, 00:25:10.129 "data_size": 63488 00:25:10.129 } 00:25:10.129 ] 00:25:10.129 }' 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@653 -- # local es=0 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:10.129 [2024-10-07 07:46:09.634449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:10.129 [2024-10-07 07:46:09.634768] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:10.129 [2024-10-07 07:46:09.634880] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:10.129 request: 00:25:10.129 { 00:25:10.129 "base_bdev": "BaseBdev1", 00:25:10.129 "raid_bdev": "raid_bdev1", 00:25:10.129 "method": "bdev_raid_add_base_bdev", 00:25:10.129 "req_id": 1 00:25:10.129 } 00:25:10.129 Got JSON-RPC error response 00:25:10.129 response: 00:25:10.129 { 00:25:10.129 "code": -22, 00:25:10.129 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:10.129 } 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 
00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@656 -- # es=1 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:25:10.129 07:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.504 07:46:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:11.504 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.504 "name": "raid_bdev1", 00:25:11.504 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:11.504 "strip_size_kb": 0, 00:25:11.504 "state": "online", 00:25:11.504 "raid_level": "raid1", 00:25:11.504 "superblock": true, 00:25:11.504 "num_base_bdevs": 2, 00:25:11.504 "num_base_bdevs_discovered": 1, 00:25:11.504 "num_base_bdevs_operational": 1, 00:25:11.504 "base_bdevs_list": [ 00:25:11.505 { 00:25:11.505 "name": null, 00:25:11.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.505 "is_configured": false, 00:25:11.505 "data_offset": 0, 00:25:11.505 "data_size": 63488 00:25:11.505 }, 00:25:11.505 { 00:25:11.505 "name": "BaseBdev2", 00:25:11.505 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:11.505 "is_configured": true, 00:25:11.505 "data_offset": 2048, 00:25:11.505 "data_size": 63488 00:25:11.505 } 00:25:11.505 ] 00:25:11.505 }' 00:25:11.505 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.505 07:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:11.762 07:46:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:11.762 "name": "raid_bdev1", 00:25:11.762 "uuid": "62e80b85-31b1-4bed-9e61-6e0682319a3c", 00:25:11.762 "strip_size_kb": 0, 00:25:11.762 "state": "online", 00:25:11.762 "raid_level": "raid1", 00:25:11.762 "superblock": true, 00:25:11.762 "num_base_bdevs": 2, 00:25:11.762 "num_base_bdevs_discovered": 1, 00:25:11.762 "num_base_bdevs_operational": 1, 00:25:11.762 "base_bdevs_list": [ 00:25:11.762 { 00:25:11.762 "name": null, 00:25:11.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.762 "is_configured": false, 00:25:11.762 "data_offset": 0, 00:25:11.762 "data_size": 63488 00:25:11.762 }, 00:25:11.762 { 00:25:11.762 "name": "BaseBdev2", 00:25:11.762 "uuid": "28c637ff-42c3-5799-ba82-7717f84dc7f3", 00:25:11.762 "is_configured": true, 00:25:11.762 "data_offset": 2048, 00:25:11.762 "data_size": 63488 00:25:11.762 } 00:25:11.762 ] 00:25:11.762 }' 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:11.762 07:46:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77066 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' -z 77066 ']' 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # kill -0 77066 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # uname 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 77066 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:25:11.762 killing process with pid 77066 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # echo 'killing process with pid 77066' 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # kill 77066 00:25:11.762 Received shutdown signal, test time was about 17.280008 seconds 00:25:11.762 00:25:11.762 Latency(us) 00:25:11.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.762 =================================================================================================================== 00:25:11.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.762 [2024-10-07 07:46:11.272272] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:11.762 07:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@977 -- # wait 77066 00:25:11.762 [2024-10-07 07:46:11.272410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:11.762 [2024-10-07 07:46:11.272477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:25:11.762 [2024-10-07 07:46:11.272496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:12.021 [2024-10-07 07:46:11.516641] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:13.399 ************************************ 00:25:13.399 END TEST raid_rebuild_test_sb_io 00:25:13.399 ************************************ 00:25:13.399 07:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:25:13.399 00:25:13.399 real 0m20.750s 00:25:13.399 user 0m27.083s 00:25:13.399 sys 0m2.369s 00:25:13.399 07:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:13.399 07:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:13.659 07:46:12 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:25:13.659 07:46:12 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:25:13.659 07:46:12 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:25:13.659 07:46:12 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:13.659 07:46:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:13.659 ************************************ 00:25:13.659 START TEST raid_rebuild_test 00:25:13.659 ************************************ 00:25:13.659 07:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 4 false false true 00:25:13.659 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:13.659 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:25:13.659 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:13.660 07:46:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:13.660 
07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:25:13.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77760 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77760 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # '[' -z 77760 ']' 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:13.660 07:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.660 [2024-10-07 07:46:13.089423] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:25:13.660 [2024-10-07 07:46:13.089792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77760 ] 00:25:13.660 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:13.660 Zero copy mechanism will not be used. 00:25:13.920 [2024-10-07 07:46:13.255697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.180 [2024-10-07 07:46:13.484933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.180 [2024-10-07 07:46:13.707657] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:14.180 [2024-10-07 07:46:13.707908] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # return 0 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.815 BaseBdev1_malloc 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:14.815 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:25:14.816 [2024-10-07 07:46:14.117814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:14.816 [2024-10-07 07:46:14.118044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.816 [2024-10-07 07:46:14.118111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:14.816 [2024-10-07 07:46:14.118339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.816 [2024-10-07 07:46:14.120954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.816 [2024-10-07 07:46:14.120998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:14.816 BaseBdev1 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 BaseBdev2_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 [2024-10-07 07:46:14.188211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:14.816 [2024-10-07 07:46:14.188429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:25:14.816 [2024-10-07 07:46:14.188495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:14.816 [2024-10-07 07:46:14.188530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.816 [2024-10-07 07:46:14.191250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.816 BaseBdev2 00:25:14.816 [2024-10-07 07:46:14.191412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 BaseBdev3_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 [2024-10-07 07:46:14.244378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:14.816 [2024-10-07 07:46:14.244587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.816 [2024-10-07 07:46:14.244652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:14.816 [2024-10-07 07:46:14.244781] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.816 [2024-10-07 07:46:14.247475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.816 [2024-10-07 07:46:14.247622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:14.816 BaseBdev3 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 BaseBdev4_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 [2024-10-07 07:46:14.298667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:14.816 [2024-10-07 07:46:14.298733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.816 [2024-10-07 07:46:14.298754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:14.816 [2024-10-07 07:46:14.298769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.816 [2024-10-07 07:46:14.301255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.816 [2024-10-07 07:46:14.301301] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:14.816 BaseBdev4 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 spare_malloc 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 spare_delay 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 [2024-10-07 07:46:14.358422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:14.816 [2024-10-07 07:46:14.358592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.816 [2024-10-07 07:46:14.358620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:14.816 [2024-10-07 07:46:14.358634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.816 [2024-10-07 
07:46:14.361216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.816 [2024-10-07 07:46:14.361256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:14.816 spare 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.816 [2024-10-07 07:46:14.366484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:14.816 [2024-10-07 07:46:14.368735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:14.816 [2024-10-07 07:46:14.368939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:14.816 [2024-10-07 07:46:14.369007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:14.816 [2024-10-07 07:46:14.369099] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:14.816 [2024-10-07 07:46:14.369114] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:14.816 [2024-10-07 07:46:14.369428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:14.816 [2024-10-07 07:46:14.369607] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:14.816 [2024-10-07 07:46:14.369619] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:14.816 [2024-10-07 07:46:14.369830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.816 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.076 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.076 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.076 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:15.076 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.076 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:15.076 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.076 "name": "raid_bdev1", 00:25:15.076 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:15.076 "strip_size_kb": 0, 00:25:15.076 "state": "online", 00:25:15.076 "raid_level": 
"raid1", 00:25:15.076 "superblock": false, 00:25:15.076 "num_base_bdevs": 4, 00:25:15.076 "num_base_bdevs_discovered": 4, 00:25:15.076 "num_base_bdevs_operational": 4, 00:25:15.076 "base_bdevs_list": [ 00:25:15.076 { 00:25:15.076 "name": "BaseBdev1", 00:25:15.076 "uuid": "4b144db0-13a8-50f8-a484-52dfd0b1d842", 00:25:15.076 "is_configured": true, 00:25:15.076 "data_offset": 0, 00:25:15.076 "data_size": 65536 00:25:15.076 }, 00:25:15.076 { 00:25:15.076 "name": "BaseBdev2", 00:25:15.076 "uuid": "74bc4d6d-553a-5c96-b6e4-78635e7e6a0c", 00:25:15.076 "is_configured": true, 00:25:15.076 "data_offset": 0, 00:25:15.076 "data_size": 65536 00:25:15.077 }, 00:25:15.077 { 00:25:15.077 "name": "BaseBdev3", 00:25:15.077 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:15.077 "is_configured": true, 00:25:15.077 "data_offset": 0, 00:25:15.077 "data_size": 65536 00:25:15.077 }, 00:25:15.077 { 00:25:15.077 "name": "BaseBdev4", 00:25:15.077 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:15.077 "is_configured": true, 00:25:15.077 "data_offset": 0, 00:25:15.077 "data_size": 65536 00:25:15.077 } 00:25:15.077 ] 00:25:15.077 }' 00:25:15.077 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.077 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.336 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:15.336 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:15.336 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:15.336 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.336 [2024-10-07 07:46:14.862925] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:15.336 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:15.594 07:46:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:15.594 07:46:14 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:15.853 [2024-10-07 07:46:15.214770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:15.853 /dev/nbd0 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # break 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:15.853 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:15.853 1+0 records in 00:25:15.853 1+0 records out 00:25:15.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243475 s, 16.8 MB/s 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:25:15.854 07:46:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:25:22.423 65536+0 records in 00:25:22.423 65536+0 records out 00:25:22.423 33554432 bytes (34 MB, 32 MiB) copied, 6.01152 s, 5.6 MB/s 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:22.423 [2024-10-07 07:46:21.528248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:22.423 
07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.423 [2024-10-07 07:46:21.540381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.423 07:46:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.423 "name": "raid_bdev1", 00:25:22.423 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:22.423 "strip_size_kb": 0, 00:25:22.423 "state": "online", 00:25:22.423 "raid_level": "raid1", 00:25:22.423 "superblock": false, 00:25:22.423 "num_base_bdevs": 4, 00:25:22.423 "num_base_bdevs_discovered": 3, 00:25:22.423 "num_base_bdevs_operational": 3, 00:25:22.423 "base_bdevs_list": [ 00:25:22.423 { 00:25:22.423 "name": null, 00:25:22.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.423 "is_configured": false, 00:25:22.423 "data_offset": 0, 00:25:22.423 "data_size": 65536 00:25:22.423 }, 00:25:22.423 { 00:25:22.423 "name": "BaseBdev2", 00:25:22.423 "uuid": "74bc4d6d-553a-5c96-b6e4-78635e7e6a0c", 00:25:22.423 "is_configured": true, 00:25:22.423 "data_offset": 0, 00:25:22.423 "data_size": 65536 00:25:22.423 }, 00:25:22.423 { 00:25:22.423 "name": "BaseBdev3", 00:25:22.423 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:22.423 "is_configured": true, 00:25:22.423 "data_offset": 0, 00:25:22.423 "data_size": 65536 00:25:22.423 }, 00:25:22.423 { 00:25:22.423 "name": "BaseBdev4", 00:25:22.423 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:22.423 
"is_configured": true, 00:25:22.423 "data_offset": 0, 00:25:22.423 "data_size": 65536 00:25:22.423 } 00:25:22.423 ] 00:25:22.423 }' 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.423 [2024-10-07 07:46:21.964479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:22.423 [2024-10-07 07:46:21.981084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:22.423 07:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:22.683 [2024-10-07 07:46:21.983596] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:23.621 
07:46:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.621 07:46:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:23.621 "name": "raid_bdev1", 00:25:23.621 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:23.621 "strip_size_kb": 0, 00:25:23.621 "state": "online", 00:25:23.621 "raid_level": "raid1", 00:25:23.621 "superblock": false, 00:25:23.621 "num_base_bdevs": 4, 00:25:23.621 "num_base_bdevs_discovered": 4, 00:25:23.621 "num_base_bdevs_operational": 4, 00:25:23.621 "process": { 00:25:23.621 "type": "rebuild", 00:25:23.621 "target": "spare", 00:25:23.621 "progress": { 00:25:23.621 "blocks": 20480, 00:25:23.621 "percent": 31 00:25:23.621 } 00:25:23.621 }, 00:25:23.621 "base_bdevs_list": [ 00:25:23.621 { 00:25:23.621 "name": "spare", 00:25:23.621 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:23.621 "is_configured": true, 00:25:23.621 "data_offset": 0, 00:25:23.621 "data_size": 65536 00:25:23.621 }, 00:25:23.621 { 00:25:23.621 "name": "BaseBdev2", 00:25:23.621 "uuid": "74bc4d6d-553a-5c96-b6e4-78635e7e6a0c", 00:25:23.621 "is_configured": true, 00:25:23.621 "data_offset": 0, 00:25:23.621 "data_size": 65536 00:25:23.621 }, 00:25:23.621 { 00:25:23.621 "name": "BaseBdev3", 00:25:23.621 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:23.621 "is_configured": true, 00:25:23.621 "data_offset": 0, 00:25:23.621 "data_size": 65536 00:25:23.621 }, 00:25:23.621 { 00:25:23.621 "name": "BaseBdev4", 00:25:23.621 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:23.621 "is_configured": true, 00:25:23.621 "data_offset": 0, 00:25:23.621 "data_size": 65536 00:25:23.621 } 00:25:23.621 ] 00:25:23.621 }' 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:23.621 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.621 [2024-10-07 07:46:23.125382] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:23.881 [2024-10-07 07:46:23.191786] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:23.881 [2024-10-07 07:46:23.191879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.881 [2024-10-07 07:46:23.191900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:23.881 [2024-10-07 07:46:23.191914] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:23.881 07:46:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.881 "name": "raid_bdev1", 00:25:23.881 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:23.881 "strip_size_kb": 0, 00:25:23.881 "state": "online", 00:25:23.881 "raid_level": "raid1", 00:25:23.881 "superblock": false, 00:25:23.881 "num_base_bdevs": 4, 00:25:23.881 "num_base_bdevs_discovered": 3, 00:25:23.881 "num_base_bdevs_operational": 3, 00:25:23.881 "base_bdevs_list": [ 00:25:23.881 { 00:25:23.881 "name": null, 00:25:23.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.881 "is_configured": false, 00:25:23.881 "data_offset": 0, 00:25:23.881 "data_size": 65536 00:25:23.881 }, 00:25:23.881 { 00:25:23.881 "name": "BaseBdev2", 00:25:23.881 "uuid": "74bc4d6d-553a-5c96-b6e4-78635e7e6a0c", 00:25:23.881 "is_configured": true, 00:25:23.881 "data_offset": 0, 00:25:23.881 "data_size": 65536 00:25:23.881 }, 00:25:23.881 { 00:25:23.881 "name": 
"BaseBdev3", 00:25:23.881 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:23.881 "is_configured": true, 00:25:23.881 "data_offset": 0, 00:25:23.881 "data_size": 65536 00:25:23.881 }, 00:25:23.881 { 00:25:23.881 "name": "BaseBdev4", 00:25:23.881 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:23.881 "is_configured": true, 00:25:23.881 "data_offset": 0, 00:25:23.881 "data_size": 65536 00:25:23.881 } 00:25:23.881 ] 00:25:23.881 }' 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.881 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:24.141 "name": "raid_bdev1", 00:25:24.141 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:24.141 "strip_size_kb": 0, 00:25:24.141 "state": "online", 00:25:24.141 "raid_level": 
"raid1", 00:25:24.141 "superblock": false, 00:25:24.141 "num_base_bdevs": 4, 00:25:24.141 "num_base_bdevs_discovered": 3, 00:25:24.141 "num_base_bdevs_operational": 3, 00:25:24.141 "base_bdevs_list": [ 00:25:24.141 { 00:25:24.141 "name": null, 00:25:24.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.141 "is_configured": false, 00:25:24.141 "data_offset": 0, 00:25:24.141 "data_size": 65536 00:25:24.141 }, 00:25:24.141 { 00:25:24.141 "name": "BaseBdev2", 00:25:24.141 "uuid": "74bc4d6d-553a-5c96-b6e4-78635e7e6a0c", 00:25:24.141 "is_configured": true, 00:25:24.141 "data_offset": 0, 00:25:24.141 "data_size": 65536 00:25:24.141 }, 00:25:24.141 { 00:25:24.141 "name": "BaseBdev3", 00:25:24.141 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:24.141 "is_configured": true, 00:25:24.141 "data_offset": 0, 00:25:24.141 "data_size": 65536 00:25:24.141 }, 00:25:24.141 { 00:25:24.141 "name": "BaseBdev4", 00:25:24.141 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:24.141 "is_configured": true, 00:25:24.141 "data_offset": 0, 00:25:24.141 "data_size": 65536 00:25:24.141 } 00:25:24.141 ] 00:25:24.141 }' 00:25:24.141 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:24.401 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:24.401 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:24.402 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:24.402 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:24.402 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:24.402 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 [2024-10-07 07:46:23.746485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:25:24.402 [2024-10-07 07:46:23.762065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:25:24.402 07:46:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:24.402 07:46:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:24.402 [2024-10-07 07:46:23.764497] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.341 "name": "raid_bdev1", 00:25:25.341 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:25.341 "strip_size_kb": 0, 00:25:25.341 "state": "online", 00:25:25.341 "raid_level": "raid1", 00:25:25.341 "superblock": false, 00:25:25.341 "num_base_bdevs": 4, 00:25:25.341 "num_base_bdevs_discovered": 4, 00:25:25.341 "num_base_bdevs_operational": 4, 
00:25:25.341 "process": { 00:25:25.341 "type": "rebuild", 00:25:25.341 "target": "spare", 00:25:25.341 "progress": { 00:25:25.341 "blocks": 20480, 00:25:25.341 "percent": 31 00:25:25.341 } 00:25:25.341 }, 00:25:25.341 "base_bdevs_list": [ 00:25:25.341 { 00:25:25.341 "name": "spare", 00:25:25.341 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:25.341 "is_configured": true, 00:25:25.341 "data_offset": 0, 00:25:25.341 "data_size": 65536 00:25:25.341 }, 00:25:25.341 { 00:25:25.341 "name": "BaseBdev2", 00:25:25.341 "uuid": "74bc4d6d-553a-5c96-b6e4-78635e7e6a0c", 00:25:25.341 "is_configured": true, 00:25:25.341 "data_offset": 0, 00:25:25.341 "data_size": 65536 00:25:25.341 }, 00:25:25.341 { 00:25:25.341 "name": "BaseBdev3", 00:25:25.341 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:25.341 "is_configured": true, 00:25:25.341 "data_offset": 0, 00:25:25.341 "data_size": 65536 00:25:25.341 }, 00:25:25.341 { 00:25:25.341 "name": "BaseBdev4", 00:25:25.341 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:25.341 "is_configured": true, 00:25:25.341 "data_offset": 0, 00:25:25.341 "data_size": 65536 00:25:25.341 } 00:25:25.341 ] 00:25:25.341 }' 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:25.341 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.341 [2024-10-07 07:46:24.890152] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:25.601 [2024-10-07 07:46:24.972552] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.601 07:46:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 
]] 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.601 "name": "raid_bdev1", 00:25:25.601 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:25.601 "strip_size_kb": 0, 00:25:25.601 "state": "online", 00:25:25.601 "raid_level": "raid1", 00:25:25.601 "superblock": false, 00:25:25.601 "num_base_bdevs": 4, 00:25:25.601 "num_base_bdevs_discovered": 3, 00:25:25.601 "num_base_bdevs_operational": 3, 00:25:25.601 "process": { 00:25:25.601 "type": "rebuild", 00:25:25.601 "target": "spare", 00:25:25.601 "progress": { 00:25:25.601 "blocks": 24576, 00:25:25.601 "percent": 37 00:25:25.601 } 00:25:25.601 }, 00:25:25.601 "base_bdevs_list": [ 00:25:25.601 { 00:25:25.601 "name": "spare", 00:25:25.601 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:25.601 "is_configured": true, 00:25:25.601 "data_offset": 0, 00:25:25.601 "data_size": 65536 00:25:25.601 }, 00:25:25.601 { 00:25:25.601 "name": null, 00:25:25.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.601 "is_configured": false, 00:25:25.601 "data_offset": 0, 00:25:25.601 "data_size": 65536 00:25:25.601 }, 00:25:25.601 { 00:25:25.601 "name": "BaseBdev3", 00:25:25.601 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:25.601 "is_configured": true, 00:25:25.601 "data_offset": 0, 00:25:25.601 "data_size": 65536 00:25:25.601 }, 00:25:25.601 { 00:25:25.601 "name": "BaseBdev4", 00:25:25.601 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:25.601 "is_configured": true, 00:25:25.601 "data_offset": 0, 00:25:25.601 "data_size": 65536 00:25:25.601 } 00:25:25.601 ] 00:25:25.601 }' 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.601 07:46:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.601 07:46:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:25.861 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:25.861 "name": "raid_bdev1", 00:25:25.861 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:25.861 "strip_size_kb": 0, 00:25:25.861 "state": "online", 00:25:25.861 "raid_level": "raid1", 00:25:25.861 "superblock": false, 00:25:25.861 "num_base_bdevs": 4, 00:25:25.861 "num_base_bdevs_discovered": 3, 00:25:25.861 "num_base_bdevs_operational": 3, 00:25:25.861 "process": { 00:25:25.861 "type": "rebuild", 00:25:25.861 "target": "spare", 00:25:25.861 "progress": { 00:25:25.861 "blocks": 26624, 00:25:25.861 "percent": 40 
00:25:25.861 } 00:25:25.861 }, 00:25:25.861 "base_bdevs_list": [ 00:25:25.861 { 00:25:25.861 "name": "spare", 00:25:25.861 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:25.861 "is_configured": true, 00:25:25.861 "data_offset": 0, 00:25:25.861 "data_size": 65536 00:25:25.861 }, 00:25:25.861 { 00:25:25.861 "name": null, 00:25:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.861 "is_configured": false, 00:25:25.861 "data_offset": 0, 00:25:25.861 "data_size": 65536 00:25:25.861 }, 00:25:25.861 { 00:25:25.861 "name": "BaseBdev3", 00:25:25.861 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:25.861 "is_configured": true, 00:25:25.861 "data_offset": 0, 00:25:25.861 "data_size": 65536 00:25:25.861 }, 00:25:25.861 { 00:25:25.861 "name": "BaseBdev4", 00:25:25.861 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:25.861 "is_configured": true, 00:25:25.861 "data_offset": 0, 00:25:25.861 "data_size": 65536 00:25:25.861 } 00:25:25.861 ] 00:25:25.861 }' 00:25:25.861 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.861 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.861 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.861 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.861 07:46:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:26.801 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:26.801 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:26.801 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:26.801 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:26.801 07:46:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:26.801 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:26.801 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.802 07:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:26.802 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.802 07:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.802 07:46:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:26.802 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.802 "name": "raid_bdev1", 00:25:26.802 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:26.802 "strip_size_kb": 0, 00:25:26.802 "state": "online", 00:25:26.802 "raid_level": "raid1", 00:25:26.802 "superblock": false, 00:25:26.802 "num_base_bdevs": 4, 00:25:26.802 "num_base_bdevs_discovered": 3, 00:25:26.802 "num_base_bdevs_operational": 3, 00:25:26.802 "process": { 00:25:26.802 "type": "rebuild", 00:25:26.802 "target": "spare", 00:25:26.802 "progress": { 00:25:26.802 "blocks": 51200, 00:25:26.802 "percent": 78 00:25:26.802 } 00:25:26.802 }, 00:25:26.802 "base_bdevs_list": [ 00:25:26.802 { 00:25:26.802 "name": "spare", 00:25:26.802 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:26.802 "is_configured": true, 00:25:26.802 "data_offset": 0, 00:25:26.802 "data_size": 65536 00:25:26.802 }, 00:25:26.802 { 00:25:26.802 "name": null, 00:25:26.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.802 "is_configured": false, 00:25:26.802 "data_offset": 0, 00:25:26.802 "data_size": 65536 00:25:26.802 }, 00:25:26.802 { 00:25:26.802 "name": "BaseBdev3", 00:25:26.802 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:26.802 "is_configured": true, 
00:25:26.802 "data_offset": 0, 00:25:26.802 "data_size": 65536 00:25:26.802 }, 00:25:26.802 { 00:25:26.802 "name": "BaseBdev4", 00:25:26.802 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:26.802 "is_configured": true, 00:25:26.802 "data_offset": 0, 00:25:26.802 "data_size": 65536 00:25:26.802 } 00:25:26.802 ] 00:25:26.802 }' 00:25:26.802 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:27.063 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:27.063 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:27.063 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:27.063 07:46:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:27.630 [2024-10-07 07:46:26.986070] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:27.630 [2024-10-07 07:46:26.986172] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:27.630 [2024-10-07 07:46:26.986230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.889 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:27.889 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.889 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.889 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:27.889 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:27.889 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:28.148 "name": "raid_bdev1", 00:25:28.148 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:28.148 "strip_size_kb": 0, 00:25:28.148 "state": "online", 00:25:28.148 "raid_level": "raid1", 00:25:28.148 "superblock": false, 00:25:28.148 "num_base_bdevs": 4, 00:25:28.148 "num_base_bdevs_discovered": 3, 00:25:28.148 "num_base_bdevs_operational": 3, 00:25:28.148 "base_bdevs_list": [ 00:25:28.148 { 00:25:28.148 "name": "spare", 00:25:28.148 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:28.148 "is_configured": true, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 }, 00:25:28.148 { 00:25:28.148 "name": null, 00:25:28.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.148 "is_configured": false, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 }, 00:25:28.148 { 00:25:28.148 "name": "BaseBdev3", 00:25:28.148 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:28.148 "is_configured": true, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 }, 00:25:28.148 { 00:25:28.148 "name": "BaseBdev4", 00:25:28.148 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:28.148 "is_configured": true, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 } 00:25:28.148 ] 00:25:28.148 }' 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:28.148 07:46:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:28.148 "name": "raid_bdev1", 00:25:28.148 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:28.148 "strip_size_kb": 0, 00:25:28.148 "state": "online", 00:25:28.148 "raid_level": "raid1", 00:25:28.148 "superblock": false, 00:25:28.148 "num_base_bdevs": 4, 00:25:28.148 "num_base_bdevs_discovered": 3, 00:25:28.148 "num_base_bdevs_operational": 3, 00:25:28.148 "base_bdevs_list": [ 00:25:28.148 { 00:25:28.148 "name": "spare", 
00:25:28.148 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:28.148 "is_configured": true, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 }, 00:25:28.148 { 00:25:28.148 "name": null, 00:25:28.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.148 "is_configured": false, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 }, 00:25:28.148 { 00:25:28.148 "name": "BaseBdev3", 00:25:28.148 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:28.148 "is_configured": true, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 }, 00:25:28.148 { 00:25:28.148 "name": "BaseBdev4", 00:25:28.148 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:28.148 "is_configured": true, 00:25:28.148 "data_offset": 0, 00:25:28.148 "data_size": 65536 00:25:28.148 } 00:25:28.148 ] 00:25:28.148 }' 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:28.148 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:28.409 07:46:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.409 "name": "raid_bdev1", 00:25:28.409 "uuid": "a84b57f9-ef46-4880-9f3b-37350d015b79", 00:25:28.409 "strip_size_kb": 0, 00:25:28.409 "state": "online", 00:25:28.409 "raid_level": "raid1", 00:25:28.409 "superblock": false, 00:25:28.409 "num_base_bdevs": 4, 00:25:28.409 "num_base_bdevs_discovered": 3, 00:25:28.409 "num_base_bdevs_operational": 3, 00:25:28.409 "base_bdevs_list": [ 00:25:28.409 { 00:25:28.409 "name": "spare", 00:25:28.409 "uuid": "224571f6-3838-53bf-bc80-232c3824d2ff", 00:25:28.409 "is_configured": true, 00:25:28.409 "data_offset": 0, 00:25:28.409 "data_size": 65536 00:25:28.409 }, 00:25:28.409 { 00:25:28.409 "name": null, 00:25:28.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.409 "is_configured": false, 00:25:28.409 "data_offset": 0, 00:25:28.409 "data_size": 65536 00:25:28.409 }, 00:25:28.409 { 00:25:28.409 "name": "BaseBdev3", 00:25:28.409 "uuid": "403d115d-a90a-5de0-a8ec-e8373c064a7b", 00:25:28.409 "is_configured": true, 
00:25:28.409 "data_offset": 0, 00:25:28.409 "data_size": 65536 00:25:28.409 }, 00:25:28.409 { 00:25:28.409 "name": "BaseBdev4", 00:25:28.409 "uuid": "83420be0-fe70-5945-bbe0-fc61ceddd7c4", 00:25:28.409 "is_configured": true, 00:25:28.409 "data_offset": 0, 00:25:28.409 "data_size": 65536 00:25:28.409 } 00:25:28.409 ] 00:25:28.409 }' 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.409 07:46:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.669 [2024-10-07 07:46:28.192799] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.669 [2024-10-07 07:46:28.192959] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.669 [2024-10-07 07:46:28.193067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.669 [2024-10-07 07:46:28.193157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.669 [2024-10-07 07:46:28.193170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:25:28.669 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:28.928 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:29.186 /dev/nbd0 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:25:29.186 07:46:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # break 00:25:29.186 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:29.187 1+0 records in 00:25:29.187 1+0 records out 00:25:29.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310806 s, 13.2 MB/s 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:29.187 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:29.498 /dev/nbd1 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:29.498 
07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # break 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:29.498 1+0 records in 00:25:29.498 1+0 records out 00:25:29.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495287 s, 8.3 MB/s 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.498 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:29.499 07:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:25:29.499 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:29.499 07:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:25:29.499 07:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:29.758 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:30.017 
07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:30.017 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77760 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' -z 77760 ']' 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # kill -0 77760 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # uname 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 77760 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:25:30.018 killing process with pid 77760 00:25:30.018 Received shutdown signal, test time was about 60.000000 seconds 00:25:30.018 00:25:30.018 Latency(us) 00:25:30.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.018 
=================================================================================================================== 00:25:30.018 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 77760' 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # kill 77760 00:25:30.018 [2024-10-07 07:46:29.555433] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:30.018 07:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@977 -- # wait 77760 00:25:30.587 [2024-10-07 07:46:30.107114] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:31.968 07:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:25:31.968 00:25:31.968 real 0m18.502s 00:25:31.968 user 0m20.907s 00:25:31.968 sys 0m3.445s 00:25:31.968 ************************************ 00:25:31.968 END TEST raid_rebuild_test 00:25:31.968 ************************************ 00:25:31.968 07:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:31.968 07:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.228 07:46:31 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:25:32.228 07:46:31 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:25:32.228 07:46:31 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:32.228 07:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:32.228 ************************************ 00:25:32.228 START TEST raid_rebuild_test_sb 00:25:32.228 ************************************ 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 4 true false true 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:32.228 
07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78212 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78212 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # '[' -z 78212 ']' 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:32.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:32.228 07:46:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.228 [2024-10-07 07:46:31.680408] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:25:32.228 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:32.228 Zero copy mechanism will not be used. 00:25:32.228 [2024-10-07 07:46:31.680829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78212 ] 00:25:32.488 [2024-10-07 07:46:31.866119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.748 [2024-10-07 07:46:32.097466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.008 [2024-10-07 07:46:32.326493] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.008 [2024-10-07 07:46:32.326547] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # return 0 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.268 BaseBdev1_malloc 
00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.268 [2024-10-07 07:46:32.718262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:33.268 [2024-10-07 07:46:32.718750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.268 [2024-10-07 07:46:32.718793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:33.268 [2024-10-07 07:46:32.718814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.268 [2024-10-07 07:46:32.721606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.268 [2024-10-07 07:46:32.721652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:33.268 BaseBdev1 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.268 BaseBdev2_malloc 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.268 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.268 [2024-10-07 07:46:32.789595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:33.268 [2024-10-07 07:46:32.789819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.268 [2024-10-07 07:46:32.789884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:33.268 [2024-10-07 07:46:32.789988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.269 [2024-10-07 07:46:32.792752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.269 [2024-10-07 07:46:32.792796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:33.269 BaseBdev2 00:25:33.269 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.269 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:33.269 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:33.269 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.269 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.529 BaseBdev3_malloc 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.529 07:46:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.529 [2024-10-07 07:46:32.844405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:33.529 [2024-10-07 07:46:32.844634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.529 [2024-10-07 07:46:32.844716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:33.529 [2024-10-07 07:46:32.844809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.529 [2024-10-07 07:46:32.847469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.529 [2024-10-07 07:46:32.847632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:33.529 BaseBdev3 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.529 BaseBdev4_malloc 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.529 [2024-10-07 07:46:32.901600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev4_malloc 00:25:33.529 [2024-10-07 07:46:32.901666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.529 [2024-10-07 07:46:32.901690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:33.529 [2024-10-07 07:46:32.901714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.529 [2024-10-07 07:46:32.904294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.529 BaseBdev4 00:25:33.529 [2024-10-07 07:46:32.904482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.529 spare_malloc 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.529 spare_delay 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.529 07:46:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.529 [2024-10-07 07:46:32.969251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:33.529 [2024-10-07 07:46:32.969443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.529 [2024-10-07 07:46:32.969504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:33.529 [2024-10-07 07:46:32.969589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.529 [2024-10-07 07:46:32.972097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.529 [2024-10-07 07:46:32.972255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:33.529 spare 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.529 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.530 [2024-10-07 07:46:32.981320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:33.530 [2024-10-07 07:46:32.983751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:33.530 [2024-10-07 07:46:32.983950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:33.530 [2024-10-07 07:46:32.984119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:33.530 [2024-10-07 07:46:32.984363] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:33.530 [2024-10-07 07:46:32.984416] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:33.530 [2024-10-07 07:46:32.984903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:33.530 [2024-10-07 07:46:32.985094] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:33.530 [2024-10-07 07:46:32.985108] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:33.530 [2024-10-07 07:46:32.985343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.530 07:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.530 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:33.530 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.530 "name": "raid_bdev1", 00:25:33.530 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:33.530 "strip_size_kb": 0, 00:25:33.530 "state": "online", 00:25:33.530 "raid_level": "raid1", 00:25:33.530 "superblock": true, 00:25:33.530 "num_base_bdevs": 4, 00:25:33.530 "num_base_bdevs_discovered": 4, 00:25:33.530 "num_base_bdevs_operational": 4, 00:25:33.530 "base_bdevs_list": [ 00:25:33.530 { 00:25:33.530 "name": "BaseBdev1", 00:25:33.530 "uuid": "c4a9e0c3-eeea-5d87-9e94-af03d24e2e62", 00:25:33.530 "is_configured": true, 00:25:33.530 "data_offset": 2048, 00:25:33.530 "data_size": 63488 00:25:33.530 }, 00:25:33.530 { 00:25:33.530 "name": "BaseBdev2", 00:25:33.530 "uuid": "760f79a6-b9a2-5a54-8b6c-d1210db6b83a", 00:25:33.530 "is_configured": true, 00:25:33.530 "data_offset": 2048, 00:25:33.530 "data_size": 63488 00:25:33.530 }, 00:25:33.530 { 00:25:33.530 "name": "BaseBdev3", 00:25:33.530 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:33.530 "is_configured": true, 00:25:33.530 "data_offset": 2048, 00:25:33.530 "data_size": 63488 00:25:33.530 }, 00:25:33.530 { 00:25:33.530 "name": "BaseBdev4", 00:25:33.530 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:33.530 "is_configured": true, 00:25:33.530 "data_offset": 2048, 00:25:33.530 "data_size": 63488 00:25:33.530 } 00:25:33.530 ] 00:25:33.530 }' 00:25:33.530 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.530 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:34.099 [2024-10-07 07:46:33.525781] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:34.099 
07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:34.099 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:34.668 [2024-10-07 07:46:33.921596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:34.668 /dev/nbd0 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:34.669 07:46:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:34.669 1+0 records in 00:25:34.669 1+0 records out 00:25:34.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237492 s, 17.2 MB/s 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:25:34.669 07:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:25:41.310 63488+0 records in 00:25:41.310 63488+0 records out 00:25:41.310 32505856 bytes (33 MB, 31 MiB) copied, 5.85018 s, 5.6 MB/s 00:25:41.310 07:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:41.310 07:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:41.310 07:46:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:41.310 07:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:41.310 07:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:41.310 07:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:41.310 07:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:41.310 [2024-10-07 07:46:40.079115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.310 [2024-10-07 07:46:40.092560] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:41.310 07:46:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:41.310 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.311 "name": "raid_bdev1", 00:25:41.311 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:41.311 "strip_size_kb": 0, 00:25:41.311 "state": "online", 00:25:41.311 "raid_level": "raid1", 00:25:41.311 "superblock": true, 00:25:41.311 "num_base_bdevs": 4, 
00:25:41.311 "num_base_bdevs_discovered": 3, 00:25:41.311 "num_base_bdevs_operational": 3, 00:25:41.311 "base_bdevs_list": [ 00:25:41.311 { 00:25:41.311 "name": null, 00:25:41.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.311 "is_configured": false, 00:25:41.311 "data_offset": 0, 00:25:41.311 "data_size": 63488 00:25:41.311 }, 00:25:41.311 { 00:25:41.311 "name": "BaseBdev2", 00:25:41.311 "uuid": "760f79a6-b9a2-5a54-8b6c-d1210db6b83a", 00:25:41.311 "is_configured": true, 00:25:41.311 "data_offset": 2048, 00:25:41.311 "data_size": 63488 00:25:41.311 }, 00:25:41.311 { 00:25:41.311 "name": "BaseBdev3", 00:25:41.311 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:41.311 "is_configured": true, 00:25:41.311 "data_offset": 2048, 00:25:41.311 "data_size": 63488 00:25:41.311 }, 00:25:41.311 { 00:25:41.311 "name": "BaseBdev4", 00:25:41.311 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:41.311 "is_configured": true, 00:25:41.311 "data_offset": 2048, 00:25:41.311 "data_size": 63488 00:25:41.311 } 00:25:41.311 ] 00:25:41.311 }' 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.311 [2024-10-07 07:46:40.536637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:41.311 [2024-10-07 07:46:40.553261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:41.311 07:46:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:25:41.311 [2024-10-07 07:46:40.555511] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:42.248 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:42.249 "name": "raid_bdev1", 00:25:42.249 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:42.249 "strip_size_kb": 0, 00:25:42.249 "state": "online", 00:25:42.249 "raid_level": "raid1", 00:25:42.249 "superblock": true, 00:25:42.249 "num_base_bdevs": 4, 00:25:42.249 "num_base_bdevs_discovered": 4, 00:25:42.249 "num_base_bdevs_operational": 4, 00:25:42.249 "process": { 00:25:42.249 "type": "rebuild", 00:25:42.249 "target": "spare", 00:25:42.249 "progress": { 00:25:42.249 "blocks": 20480, 00:25:42.249 "percent": 32 00:25:42.249 } 00:25:42.249 }, 00:25:42.249 "base_bdevs_list": [ 00:25:42.249 { 
00:25:42.249 "name": "spare", 00:25:42.249 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:42.249 "is_configured": true, 00:25:42.249 "data_offset": 2048, 00:25:42.249 "data_size": 63488 00:25:42.249 }, 00:25:42.249 { 00:25:42.249 "name": "BaseBdev2", 00:25:42.249 "uuid": "760f79a6-b9a2-5a54-8b6c-d1210db6b83a", 00:25:42.249 "is_configured": true, 00:25:42.249 "data_offset": 2048, 00:25:42.249 "data_size": 63488 00:25:42.249 }, 00:25:42.249 { 00:25:42.249 "name": "BaseBdev3", 00:25:42.249 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:42.249 "is_configured": true, 00:25:42.249 "data_offset": 2048, 00:25:42.249 "data_size": 63488 00:25:42.249 }, 00:25:42.249 { 00:25:42.249 "name": "BaseBdev4", 00:25:42.249 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:42.249 "is_configured": true, 00:25:42.249 "data_offset": 2048, 00:25:42.249 "data_size": 63488 00:25:42.249 } 00:25:42.249 ] 00:25:42.249 }' 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.249 [2024-10-07 07:46:41.725122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:42.249 [2024-10-07 07:46:41.763473] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:42.249 [2024-10-07 
07:46:41.763748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.249 [2024-10-07 07:46:41.763776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:42.249 [2024-10-07 07:46:41.763791] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:42.249 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:25:42.508 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:42.508 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.508 "name": "raid_bdev1", 00:25:42.508 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:42.508 "strip_size_kb": 0, 00:25:42.508 "state": "online", 00:25:42.508 "raid_level": "raid1", 00:25:42.508 "superblock": true, 00:25:42.508 "num_base_bdevs": 4, 00:25:42.508 "num_base_bdevs_discovered": 3, 00:25:42.508 "num_base_bdevs_operational": 3, 00:25:42.508 "base_bdevs_list": [ 00:25:42.508 { 00:25:42.508 "name": null, 00:25:42.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.508 "is_configured": false, 00:25:42.508 "data_offset": 0, 00:25:42.508 "data_size": 63488 00:25:42.508 }, 00:25:42.508 { 00:25:42.508 "name": "BaseBdev2", 00:25:42.509 "uuid": "760f79a6-b9a2-5a54-8b6c-d1210db6b83a", 00:25:42.509 "is_configured": true, 00:25:42.509 "data_offset": 2048, 00:25:42.509 "data_size": 63488 00:25:42.509 }, 00:25:42.509 { 00:25:42.509 "name": "BaseBdev3", 00:25:42.509 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:42.509 "is_configured": true, 00:25:42.509 "data_offset": 2048, 00:25:42.509 "data_size": 63488 00:25:42.509 }, 00:25:42.509 { 00:25:42.509 "name": "BaseBdev4", 00:25:42.509 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:42.509 "is_configured": true, 00:25:42.509 "data_offset": 2048, 00:25:42.509 "data_size": 63488 00:25:42.509 } 00:25:42.509 ] 00:25:42.509 }' 00:25:42.509 07:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.509 07:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:42.769 07:46:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:42.769 "name": "raid_bdev1", 00:25:42.769 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:42.769 "strip_size_kb": 0, 00:25:42.769 "state": "online", 00:25:42.769 "raid_level": "raid1", 00:25:42.769 "superblock": true, 00:25:42.769 "num_base_bdevs": 4, 00:25:42.769 "num_base_bdevs_discovered": 3, 00:25:42.769 "num_base_bdevs_operational": 3, 00:25:42.769 "base_bdevs_list": [ 00:25:42.769 { 00:25:42.769 "name": null, 00:25:42.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.769 "is_configured": false, 00:25:42.769 "data_offset": 0, 00:25:42.769 "data_size": 63488 00:25:42.769 }, 00:25:42.769 { 00:25:42.769 "name": "BaseBdev2", 00:25:42.769 "uuid": "760f79a6-b9a2-5a54-8b6c-d1210db6b83a", 00:25:42.769 "is_configured": true, 00:25:42.769 "data_offset": 2048, 00:25:42.769 "data_size": 63488 00:25:42.769 }, 00:25:42.769 { 00:25:42.769 "name": "BaseBdev3", 00:25:42.769 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:42.769 "is_configured": true, 00:25:42.769 "data_offset": 2048, 00:25:42.769 "data_size": 63488 
00:25:42.769 }, 00:25:42.769 { 00:25:42.769 "name": "BaseBdev4", 00:25:42.769 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:42.769 "is_configured": true, 00:25:42.769 "data_offset": 2048, 00:25:42.769 "data_size": 63488 00:25:42.769 } 00:25:42.769 ] 00:25:42.769 }' 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:42.769 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:43.029 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:43.029 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:43.029 07:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.029 07:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:43.029 [2024-10-07 07:46:42.382005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:43.029 [2024-10-07 07:46:42.397965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:25:43.029 07:46:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.029 07:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:43.029 [2024-10-07 07:46:42.400229] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:43.968 "name": "raid_bdev1", 00:25:43.968 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:43.968 "strip_size_kb": 0, 00:25:43.968 "state": "online", 00:25:43.968 "raid_level": "raid1", 00:25:43.968 "superblock": true, 00:25:43.968 "num_base_bdevs": 4, 00:25:43.968 "num_base_bdevs_discovered": 4, 00:25:43.968 "num_base_bdevs_operational": 4, 00:25:43.968 "process": { 00:25:43.968 "type": "rebuild", 00:25:43.968 "target": "spare", 00:25:43.968 "progress": { 00:25:43.968 "blocks": 20480, 00:25:43.968 "percent": 32 00:25:43.968 } 00:25:43.968 }, 00:25:43.968 "base_bdevs_list": [ 00:25:43.968 { 00:25:43.968 "name": "spare", 00:25:43.968 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:43.968 "is_configured": true, 00:25:43.968 "data_offset": 2048, 00:25:43.968 "data_size": 63488 00:25:43.968 }, 00:25:43.968 { 00:25:43.968 "name": "BaseBdev2", 00:25:43.968 "uuid": "760f79a6-b9a2-5a54-8b6c-d1210db6b83a", 00:25:43.968 "is_configured": true, 00:25:43.968 "data_offset": 2048, 00:25:43.968 "data_size": 63488 00:25:43.968 }, 00:25:43.968 { 00:25:43.968 "name": "BaseBdev3", 00:25:43.968 "uuid": 
"c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:43.968 "is_configured": true, 00:25:43.968 "data_offset": 2048, 00:25:43.968 "data_size": 63488 00:25:43.968 }, 00:25:43.968 { 00:25:43.968 "name": "BaseBdev4", 00:25:43.968 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:43.968 "is_configured": true, 00:25:43.968 "data_offset": 2048, 00:25:43.968 "data_size": 63488 00:25:43.968 } 00:25:43.968 ] 00:25:43.968 }' 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.968 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:44.228 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.228 [2024-10-07 07:46:43.545865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:44.228 [2024-10-07 07:46:43.708328] 
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.228 "name": "raid_bdev1", 00:25:44.228 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:44.228 "strip_size_kb": 0, 00:25:44.228 "state": "online", 00:25:44.228 "raid_level": "raid1", 00:25:44.228 "superblock": true, 00:25:44.228 "num_base_bdevs": 4, 00:25:44.228 "num_base_bdevs_discovered": 3, 00:25:44.228 "num_base_bdevs_operational": 3, 00:25:44.228 
"process": { 00:25:44.228 "type": "rebuild", 00:25:44.228 "target": "spare", 00:25:44.228 "progress": { 00:25:44.228 "blocks": 24576, 00:25:44.228 "percent": 38 00:25:44.228 } 00:25:44.228 }, 00:25:44.228 "base_bdevs_list": [ 00:25:44.228 { 00:25:44.228 "name": "spare", 00:25:44.228 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:44.228 "is_configured": true, 00:25:44.228 "data_offset": 2048, 00:25:44.228 "data_size": 63488 00:25:44.228 }, 00:25:44.228 { 00:25:44.228 "name": null, 00:25:44.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.228 "is_configured": false, 00:25:44.228 "data_offset": 0, 00:25:44.228 "data_size": 63488 00:25:44.228 }, 00:25:44.228 { 00:25:44.228 "name": "BaseBdev3", 00:25:44.228 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:44.228 "is_configured": true, 00:25:44.228 "data_offset": 2048, 00:25:44.228 "data_size": 63488 00:25:44.228 }, 00:25:44.228 { 00:25:44.228 "name": "BaseBdev4", 00:25:44.228 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:44.228 "is_configured": true, 00:25:44.228 "data_offset": 2048, 00:25:44.228 "data_size": 63488 00:25:44.228 } 00:25:44.228 ] 00:25:44.228 }' 00:25:44.228 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=493 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.489 07:46:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:44.489 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.489 "name": "raid_bdev1", 00:25:44.489 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:44.489 "strip_size_kb": 0, 00:25:44.489 "state": "online", 00:25:44.489 "raid_level": "raid1", 00:25:44.489 "superblock": true, 00:25:44.490 "num_base_bdevs": 4, 00:25:44.490 "num_base_bdevs_discovered": 3, 00:25:44.490 "num_base_bdevs_operational": 3, 00:25:44.490 "process": { 00:25:44.490 "type": "rebuild", 00:25:44.490 "target": "spare", 00:25:44.490 "progress": { 00:25:44.490 "blocks": 26624, 00:25:44.490 "percent": 41 00:25:44.490 } 00:25:44.490 }, 00:25:44.490 "base_bdevs_list": [ 00:25:44.490 { 00:25:44.490 "name": "spare", 00:25:44.490 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:44.490 "is_configured": true, 00:25:44.490 "data_offset": 2048, 00:25:44.490 "data_size": 63488 00:25:44.490 }, 00:25:44.490 { 00:25:44.490 "name": null, 00:25:44.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.490 
"is_configured": false, 00:25:44.490 "data_offset": 0, 00:25:44.490 "data_size": 63488 00:25:44.490 }, 00:25:44.490 { 00:25:44.490 "name": "BaseBdev3", 00:25:44.490 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:44.490 "is_configured": true, 00:25:44.490 "data_offset": 2048, 00:25:44.490 "data_size": 63488 00:25:44.490 }, 00:25:44.490 { 00:25:44.490 "name": "BaseBdev4", 00:25:44.490 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:44.490 "is_configured": true, 00:25:44.490 "data_offset": 2048, 00:25:44.490 "data_size": 63488 00:25:44.490 } 00:25:44.490 ] 00:25:44.490 }' 00:25:44.490 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.490 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.490 07:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.490 07:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.490 07:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:45.874 "name": "raid_bdev1", 00:25:45.874 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:45.874 "strip_size_kb": 0, 00:25:45.874 "state": "online", 00:25:45.874 "raid_level": "raid1", 00:25:45.874 "superblock": true, 00:25:45.874 "num_base_bdevs": 4, 00:25:45.874 "num_base_bdevs_discovered": 3, 00:25:45.874 "num_base_bdevs_operational": 3, 00:25:45.874 "process": { 00:25:45.874 "type": "rebuild", 00:25:45.874 "target": "spare", 00:25:45.874 "progress": { 00:25:45.874 "blocks": 51200, 00:25:45.874 "percent": 80 00:25:45.874 } 00:25:45.874 }, 00:25:45.874 "base_bdevs_list": [ 00:25:45.874 { 00:25:45.874 "name": "spare", 00:25:45.874 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:45.874 "is_configured": true, 00:25:45.874 "data_offset": 2048, 00:25:45.874 "data_size": 63488 00:25:45.874 }, 00:25:45.874 { 00:25:45.874 "name": null, 00:25:45.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.874 "is_configured": false, 00:25:45.874 "data_offset": 0, 00:25:45.874 "data_size": 63488 00:25:45.874 }, 00:25:45.874 { 00:25:45.874 "name": "BaseBdev3", 00:25:45.874 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:45.874 "is_configured": true, 00:25:45.874 "data_offset": 2048, 00:25:45.874 "data_size": 63488 00:25:45.874 }, 00:25:45.874 { 00:25:45.874 "name": "BaseBdev4", 00:25:45.874 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:45.874 "is_configured": true, 00:25:45.874 "data_offset": 2048, 00:25:45.874 "data_size": 63488 00:25:45.874 } 00:25:45.874 ] 00:25:45.874 }' 00:25:45.874 
07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.874 07:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:46.133 [2024-10-07 07:46:45.621255] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:46.133 [2024-10-07 07:46:45.621335] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:46.133 [2024-10-07 07:46:45.621465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.702 "name": "raid_bdev1", 00:25:46.702 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:46.702 "strip_size_kb": 0, 00:25:46.702 "state": "online", 00:25:46.702 "raid_level": "raid1", 00:25:46.702 "superblock": true, 00:25:46.702 "num_base_bdevs": 4, 00:25:46.702 "num_base_bdevs_discovered": 3, 00:25:46.702 "num_base_bdevs_operational": 3, 00:25:46.702 "base_bdevs_list": [ 00:25:46.702 { 00:25:46.702 "name": "spare", 00:25:46.702 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:46.702 "is_configured": true, 00:25:46.702 "data_offset": 2048, 00:25:46.702 "data_size": 63488 00:25:46.702 }, 00:25:46.702 { 00:25:46.702 "name": null, 00:25:46.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.702 "is_configured": false, 00:25:46.702 "data_offset": 0, 00:25:46.702 "data_size": 63488 00:25:46.702 }, 00:25:46.702 { 00:25:46.702 "name": "BaseBdev3", 00:25:46.702 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:46.702 "is_configured": true, 00:25:46.702 "data_offset": 2048, 00:25:46.702 "data_size": 63488 00:25:46.702 }, 00:25:46.702 { 00:25:46.702 "name": "BaseBdev4", 00:25:46.702 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:46.702 "is_configured": true, 00:25:46.702 "data_offset": 2048, 00:25:46.702 "data_size": 63488 00:25:46.702 } 00:25:46.702 ] 00:25:46.702 }' 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:46.702 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none 
== \s\p\a\r\e ]] 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.962 "name": "raid_bdev1", 00:25:46.962 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:46.962 "strip_size_kb": 0, 00:25:46.962 "state": "online", 00:25:46.962 "raid_level": "raid1", 00:25:46.962 "superblock": true, 00:25:46.962 "num_base_bdevs": 4, 00:25:46.962 "num_base_bdevs_discovered": 3, 00:25:46.962 "num_base_bdevs_operational": 3, 00:25:46.962 "base_bdevs_list": [ 00:25:46.962 { 00:25:46.962 "name": "spare", 00:25:46.962 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:46.962 "is_configured": true, 00:25:46.962 "data_offset": 2048, 00:25:46.962 "data_size": 63488 00:25:46.962 }, 00:25:46.962 { 00:25:46.962 "name": null, 00:25:46.962 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:46.962 "is_configured": false, 00:25:46.962 "data_offset": 0, 00:25:46.962 "data_size": 63488 00:25:46.962 }, 00:25:46.962 { 00:25:46.962 "name": "BaseBdev3", 00:25:46.962 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:46.962 "is_configured": true, 00:25:46.962 "data_offset": 2048, 00:25:46.962 "data_size": 63488 00:25:46.962 }, 00:25:46.962 { 00:25:46.962 "name": "BaseBdev4", 00:25:46.962 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:46.962 "is_configured": true, 00:25:46.962 "data_offset": 2048, 00:25:46.962 "data_size": 63488 00:25:46.962 } 00:25:46.962 ] 00:25:46.962 }' 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.962 
07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:46.962 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.963 "name": "raid_bdev1", 00:25:46.963 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:46.963 "strip_size_kb": 0, 00:25:46.963 "state": "online", 00:25:46.963 "raid_level": "raid1", 00:25:46.963 "superblock": true, 00:25:46.963 "num_base_bdevs": 4, 00:25:46.963 "num_base_bdevs_discovered": 3, 00:25:46.963 "num_base_bdevs_operational": 3, 00:25:46.963 "base_bdevs_list": [ 00:25:46.963 { 00:25:46.963 "name": "spare", 00:25:46.963 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:46.963 "is_configured": true, 00:25:46.963 "data_offset": 2048, 00:25:46.963 "data_size": 63488 00:25:46.963 }, 00:25:46.963 { 00:25:46.963 "name": null, 00:25:46.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.963 "is_configured": false, 00:25:46.963 "data_offset": 0, 00:25:46.963 "data_size": 63488 00:25:46.963 }, 00:25:46.963 { 00:25:46.963 "name": "BaseBdev3", 00:25:46.963 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:46.963 "is_configured": true, 00:25:46.963 "data_offset": 2048, 00:25:46.963 "data_size": 63488 00:25:46.963 }, 00:25:46.963 { 00:25:46.963 "name": "BaseBdev4", 00:25:46.963 "uuid": 
"e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:46.963 "is_configured": true, 00:25:46.963 "data_offset": 2048, 00:25:46.963 "data_size": 63488 00:25:46.963 } 00:25:46.963 ] 00:25:46.963 }' 00:25:46.963 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.963 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.531 [2024-10-07 07:46:46.822834] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.531 [2024-10-07 07:46:46.822994] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:47.531 [2024-10-07 07:46:46.823171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:47.531 [2024-10-07 07:46:46.823264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:47.531 [2024-10-07 07:46:46.823278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:47.531 07:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:47.791 /dev/nbd0 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # 
(( i = 1 )) 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:47.791 1+0 records in 00:25:47.791 1+0 records out 00:25:47.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384346 s, 10.7 MB/s 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:47.791 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:48.050 /dev/nbd1 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:48.050 07:46:47 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:25:48.050 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:48.050 1+0 records in 00:25:48.050 1+0 records out 00:25:48.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050326 s, 8.1 MB/s 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:48.051 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:48.310 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:48.310 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:48.310 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:48.310 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:48.310 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:48.310 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.310 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.569 07:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.829 [2024-10-07 07:46:48.206129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:48.829 [2024-10-07 07:46:48.206305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:25:48.829 [2024-10-07 07:46:48.206341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:48.829 [2024-10-07 07:46:48.206354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.829 [2024-10-07 07:46:48.208950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.829 [2024-10-07 07:46:48.208991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:48.829 [2024-10-07 07:46:48.209101] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:48.829 [2024-10-07 07:46:48.209153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.829 spare 00:25:48.829 [2024-10-07 07:46:48.209323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:48.829 [2024-10-07 07:46:48.209439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.829 [2024-10-07 07:46:48.309546] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:48.829 [2024-10-07 07:46:48.309582] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:48.829 [2024-10-07 07:46:48.309980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:25:48.829 [2024-10-07 07:46:48.310195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:48.829 [2024-10-07 07:46:48.310214] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:48.829 [2024-10-07 07:46:48.310431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:48.829 
07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.829 "name": "raid_bdev1", 00:25:48.829 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:48.829 "strip_size_kb": 0, 00:25:48.829 "state": "online", 00:25:48.829 "raid_level": "raid1", 00:25:48.829 "superblock": true, 00:25:48.829 "num_base_bdevs": 4, 00:25:48.829 "num_base_bdevs_discovered": 3, 00:25:48.829 "num_base_bdevs_operational": 3, 00:25:48.829 "base_bdevs_list": [ 00:25:48.829 { 00:25:48.829 "name": "spare", 00:25:48.829 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:48.829 "is_configured": true, 00:25:48.829 "data_offset": 2048, 00:25:48.829 "data_size": 63488 00:25:48.829 }, 00:25:48.829 { 00:25:48.829 "name": null, 00:25:48.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.829 "is_configured": false, 00:25:48.829 "data_offset": 2048, 00:25:48.829 "data_size": 63488 00:25:48.829 }, 00:25:48.829 { 00:25:48.829 "name": "BaseBdev3", 00:25:48.829 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:48.829 "is_configured": true, 00:25:48.829 "data_offset": 2048, 00:25:48.829 "data_size": 63488 00:25:48.829 }, 00:25:48.829 { 00:25:48.829 "name": "BaseBdev4", 00:25:48.829 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:48.829 "is_configured": true, 00:25:48.829 "data_offset": 2048, 00:25:48.829 "data_size": 63488 00:25:48.829 } 00:25:48.829 ] 00:25:48.829 }' 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.829 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:49.397 07:46:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:49.397 "name": "raid_bdev1", 00:25:49.397 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:49.397 "strip_size_kb": 0, 00:25:49.397 "state": "online", 00:25:49.397 "raid_level": "raid1", 00:25:49.397 "superblock": true, 00:25:49.397 "num_base_bdevs": 4, 00:25:49.397 "num_base_bdevs_discovered": 3, 00:25:49.397 "num_base_bdevs_operational": 3, 00:25:49.397 "base_bdevs_list": [ 00:25:49.397 { 00:25:49.397 "name": "spare", 00:25:49.397 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:49.397 "is_configured": true, 00:25:49.397 "data_offset": 2048, 00:25:49.397 "data_size": 63488 00:25:49.397 }, 00:25:49.397 { 00:25:49.397 "name": null, 00:25:49.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.397 "is_configured": false, 00:25:49.397 "data_offset": 2048, 00:25:49.397 "data_size": 63488 00:25:49.397 }, 00:25:49.397 { 00:25:49.397 "name": "BaseBdev3", 00:25:49.397 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:49.397 "is_configured": true, 00:25:49.397 "data_offset": 2048, 00:25:49.397 "data_size": 63488 00:25:49.397 }, 00:25:49.397 { 00:25:49.397 "name": "BaseBdev4", 00:25:49.397 "uuid": 
"e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:49.397 "is_configured": true, 00:25:49.397 "data_offset": 2048, 00:25:49.397 "data_size": 63488 00:25:49.397 } 00:25:49.397 ] 00:25:49.397 }' 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.397 [2024-10-07 07:46:48.918522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:49.397 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:49.398 07:46:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.398 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:49.657 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.657 "name": "raid_bdev1", 00:25:49.657 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:49.657 "strip_size_kb": 0, 00:25:49.657 "state": "online", 00:25:49.657 "raid_level": "raid1", 00:25:49.657 "superblock": true, 00:25:49.657 "num_base_bdevs": 4, 00:25:49.657 "num_base_bdevs_discovered": 2, 00:25:49.657 "num_base_bdevs_operational": 2, 00:25:49.657 "base_bdevs_list": [ 00:25:49.657 { 
00:25:49.657 "name": null, 00:25:49.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.657 "is_configured": false, 00:25:49.657 "data_offset": 0, 00:25:49.657 "data_size": 63488 00:25:49.657 }, 00:25:49.657 { 00:25:49.657 "name": null, 00:25:49.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.657 "is_configured": false, 00:25:49.657 "data_offset": 2048, 00:25:49.657 "data_size": 63488 00:25:49.657 }, 00:25:49.657 { 00:25:49.657 "name": "BaseBdev3", 00:25:49.657 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:49.657 "is_configured": true, 00:25:49.657 "data_offset": 2048, 00:25:49.657 "data_size": 63488 00:25:49.657 }, 00:25:49.657 { 00:25:49.657 "name": "BaseBdev4", 00:25:49.657 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:49.657 "is_configured": true, 00:25:49.657 "data_offset": 2048, 00:25:49.657 "data_size": 63488 00:25:49.657 } 00:25:49.657 ] 00:25:49.657 }' 00:25:49.657 07:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.657 07:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.917 07:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:49.917 07:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:49.917 07:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.917 [2024-10-07 07:46:49.382627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.917 [2024-10-07 07:46:49.382962] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:49.917 [2024-10-07 07:46:49.382989] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:49.917 [2024-10-07 07:46:49.383034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.917 [2024-10-07 07:46:49.397537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:25:49.917 07:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:49.917 07:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:49.917 [2024-10-07 07:46:49.399774] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:50.856 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:50.857 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:51.116 "name": "raid_bdev1", 00:25:51.116 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:51.116 "strip_size_kb": 0, 00:25:51.116 "state": "online", 00:25:51.116 "raid_level": "raid1", 
00:25:51.116 "superblock": true, 00:25:51.116 "num_base_bdevs": 4, 00:25:51.116 "num_base_bdevs_discovered": 3, 00:25:51.116 "num_base_bdevs_operational": 3, 00:25:51.116 "process": { 00:25:51.116 "type": "rebuild", 00:25:51.116 "target": "spare", 00:25:51.116 "progress": { 00:25:51.116 "blocks": 20480, 00:25:51.116 "percent": 32 00:25:51.116 } 00:25:51.116 }, 00:25:51.116 "base_bdevs_list": [ 00:25:51.116 { 00:25:51.116 "name": "spare", 00:25:51.116 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:51.116 "is_configured": true, 00:25:51.116 "data_offset": 2048, 00:25:51.116 "data_size": 63488 00:25:51.116 }, 00:25:51.116 { 00:25:51.116 "name": null, 00:25:51.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.116 "is_configured": false, 00:25:51.116 "data_offset": 2048, 00:25:51.116 "data_size": 63488 00:25:51.116 }, 00:25:51.116 { 00:25:51.116 "name": "BaseBdev3", 00:25:51.116 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:51.116 "is_configured": true, 00:25:51.116 "data_offset": 2048, 00:25:51.116 "data_size": 63488 00:25:51.116 }, 00:25:51.116 { 00:25:51.116 "name": "BaseBdev4", 00:25:51.116 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:51.116 "is_configured": true, 00:25:51.116 "data_offset": 2048, 00:25:51.116 "data_size": 63488 00:25:51.116 } 00:25:51.116 ] 00:25:51.116 }' 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.116 [2024-10-07 07:46:50.537847] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.116 [2024-10-07 07:46:50.607560] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:51.116 [2024-10-07 07:46:50.607638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.116 [2024-10-07 07:46:50.607660] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.116 [2024-10-07 07:46:50.607669] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.116 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:51.375 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.375 "name": "raid_bdev1", 00:25:51.375 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:51.375 "strip_size_kb": 0, 00:25:51.375 "state": "online", 00:25:51.375 "raid_level": "raid1", 00:25:51.375 "superblock": true, 00:25:51.375 "num_base_bdevs": 4, 00:25:51.375 "num_base_bdevs_discovered": 2, 00:25:51.375 "num_base_bdevs_operational": 2, 00:25:51.375 "base_bdevs_list": [ 00:25:51.375 { 00:25:51.375 "name": null, 00:25:51.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.375 "is_configured": false, 00:25:51.375 "data_offset": 0, 00:25:51.375 "data_size": 63488 00:25:51.375 }, 00:25:51.375 { 00:25:51.375 "name": null, 00:25:51.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.375 "is_configured": false, 00:25:51.375 "data_offset": 2048, 00:25:51.375 "data_size": 63488 00:25:51.375 }, 00:25:51.375 { 00:25:51.375 "name": "BaseBdev3", 00:25:51.375 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:51.375 "is_configured": true, 00:25:51.375 "data_offset": 2048, 00:25:51.375 "data_size": 63488 00:25:51.375 }, 00:25:51.375 { 00:25:51.375 "name": "BaseBdev4", 00:25:51.376 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:51.376 "is_configured": true, 00:25:51.376 "data_offset": 2048, 00:25:51.376 "data_size": 63488 00:25:51.376 } 00:25:51.376 ] 00:25:51.376 }' 00:25:51.376 07:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:25:51.376 07:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 07:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:51.635 07:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:51.635 07:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 [2024-10-07 07:46:51.089426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:51.635 [2024-10-07 07:46:51.089496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.635 [2024-10-07 07:46:51.089530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:51.635 [2024-10-07 07:46:51.089543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.635 [2024-10-07 07:46:51.090075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.635 [2024-10-07 07:46:51.090101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:51.635 [2024-10-07 07:46:51.090199] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:51.635 [2024-10-07 07:46:51.090213] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:51.635 [2024-10-07 07:46:51.090230] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:51.635 [2024-10-07 07:46:51.090255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:51.635 spare 00:25:51.635 [2024-10-07 07:46:51.105032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:25:51.635 07:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:51.635 07:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:51.635 [2024-10-07 07:46:51.107176] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:52.574 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:52.860 "name": "raid_bdev1", 00:25:52.860 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:52.860 "strip_size_kb": 0, 00:25:52.860 "state": "online", 00:25:52.860 
"raid_level": "raid1", 00:25:52.860 "superblock": true, 00:25:52.860 "num_base_bdevs": 4, 00:25:52.860 "num_base_bdevs_discovered": 3, 00:25:52.860 "num_base_bdevs_operational": 3, 00:25:52.860 "process": { 00:25:52.860 "type": "rebuild", 00:25:52.860 "target": "spare", 00:25:52.860 "progress": { 00:25:52.860 "blocks": 20480, 00:25:52.860 "percent": 32 00:25:52.860 } 00:25:52.860 }, 00:25:52.860 "base_bdevs_list": [ 00:25:52.860 { 00:25:52.860 "name": "spare", 00:25:52.860 "uuid": "981d8c01-9f16-5ced-ab0c-ee05abf88cbf", 00:25:52.860 "is_configured": true, 00:25:52.860 "data_offset": 2048, 00:25:52.860 "data_size": 63488 00:25:52.860 }, 00:25:52.860 { 00:25:52.860 "name": null, 00:25:52.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.860 "is_configured": false, 00:25:52.860 "data_offset": 2048, 00:25:52.860 "data_size": 63488 00:25:52.860 }, 00:25:52.860 { 00:25:52.860 "name": "BaseBdev3", 00:25:52.860 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:52.860 "is_configured": true, 00:25:52.860 "data_offset": 2048, 00:25:52.860 "data_size": 63488 00:25:52.860 }, 00:25:52.860 { 00:25:52.860 "name": "BaseBdev4", 00:25:52.860 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:52.860 "is_configured": true, 00:25:52.860 "data_offset": 2048, 00:25:52.860 "data_size": 63488 00:25:52.860 } 00:25:52.860 ] 00:25:52.860 }' 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.860 [2024-10-07 07:46:52.248950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:52.860 [2024-10-07 07:46:52.315291] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:52.860 [2024-10-07 07:46:52.315488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.860 [2024-10-07 07:46:52.315510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:52.860 [2024-10-07 07:46:52.315524] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.860 
07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.860 "name": "raid_bdev1", 00:25:52.860 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:52.860 "strip_size_kb": 0, 00:25:52.860 "state": "online", 00:25:52.860 "raid_level": "raid1", 00:25:52.860 "superblock": true, 00:25:52.860 "num_base_bdevs": 4, 00:25:52.860 "num_base_bdevs_discovered": 2, 00:25:52.860 "num_base_bdevs_operational": 2, 00:25:52.860 "base_bdevs_list": [ 00:25:52.860 { 00:25:52.860 "name": null, 00:25:52.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.860 "is_configured": false, 00:25:52.860 "data_offset": 0, 00:25:52.860 "data_size": 63488 00:25:52.860 }, 00:25:52.860 { 00:25:52.860 "name": null, 00:25:52.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.860 "is_configured": false, 00:25:52.860 "data_offset": 2048, 00:25:52.860 "data_size": 63488 00:25:52.860 }, 00:25:52.860 { 00:25:52.860 "name": "BaseBdev3", 00:25:52.860 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:52.860 "is_configured": true, 00:25:52.860 "data_offset": 2048, 00:25:52.860 "data_size": 63488 00:25:52.860 }, 00:25:52.860 { 00:25:52.860 "name": "BaseBdev4", 00:25:52.860 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:52.860 "is_configured": true, 00:25:52.860 "data_offset": 2048, 00:25:52.860 "data_size": 63488 00:25:52.860 } 00:25:52.860 ] 00:25:52.860 }' 00:25:52.860 07:46:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:52.860 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:53.446 "name": "raid_bdev1", 00:25:53.446 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:53.446 "strip_size_kb": 0, 00:25:53.446 "state": "online", 00:25:53.446 "raid_level": "raid1", 00:25:53.446 "superblock": true, 00:25:53.446 "num_base_bdevs": 4, 00:25:53.446 "num_base_bdevs_discovered": 2, 00:25:53.446 "num_base_bdevs_operational": 2, 00:25:53.446 "base_bdevs_list": [ 00:25:53.446 { 00:25:53.446 "name": null, 00:25:53.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.446 "is_configured": false, 00:25:53.446 "data_offset": 0, 00:25:53.446 "data_size": 63488 00:25:53.446 }, 00:25:53.446 
{ 00:25:53.446 "name": null, 00:25:53.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.446 "is_configured": false, 00:25:53.446 "data_offset": 2048, 00:25:53.446 "data_size": 63488 00:25:53.446 }, 00:25:53.446 { 00:25:53.446 "name": "BaseBdev3", 00:25:53.446 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:53.446 "is_configured": true, 00:25:53.446 "data_offset": 2048, 00:25:53.446 "data_size": 63488 00:25:53.446 }, 00:25:53.446 { 00:25:53.446 "name": "BaseBdev4", 00:25:53.446 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:53.446 "is_configured": true, 00:25:53.446 "data_offset": 2048, 00:25:53.446 "data_size": 63488 00:25:53.446 } 00:25:53.446 ] 00:25:53.446 }' 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 [2024-10-07 07:46:52.920759] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:53.446 [2024-10-07 07:46:52.920957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.446 [2024-10-07 07:46:52.921020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:25:53.446 [2024-10-07 07:46:52.921119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.446 [2024-10-07 07:46:52.921709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.446 [2024-10-07 07:46:52.921868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:53.446 [2024-10-07 07:46:52.921973] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:53.446 [2024-10-07 07:46:52.921995] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:53.446 [2024-10-07 07:46:52.922006] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:53.446 [2024-10-07 07:46:52.922025] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:53.446 BaseBdev1 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:53.446 07:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:54.383 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.384 07:46:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:54.384 07:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.643 07:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:54.643 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.643 "name": "raid_bdev1", 00:25:54.643 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:54.643 "strip_size_kb": 0, 00:25:54.643 "state": "online", 00:25:54.643 "raid_level": "raid1", 00:25:54.643 "superblock": true, 00:25:54.643 "num_base_bdevs": 4, 00:25:54.643 "num_base_bdevs_discovered": 2, 00:25:54.643 "num_base_bdevs_operational": 2, 00:25:54.643 "base_bdevs_list": [ 00:25:54.643 { 00:25:54.643 "name": null, 00:25:54.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.643 "is_configured": false, 00:25:54.643 "data_offset": 0, 00:25:54.643 "data_size": 63488 00:25:54.643 }, 00:25:54.643 { 00:25:54.643 "name": null, 00:25:54.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.643 
"is_configured": false, 00:25:54.643 "data_offset": 2048, 00:25:54.643 "data_size": 63488 00:25:54.643 }, 00:25:54.643 { 00:25:54.643 "name": "BaseBdev3", 00:25:54.643 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:54.643 "is_configured": true, 00:25:54.643 "data_offset": 2048, 00:25:54.643 "data_size": 63488 00:25:54.643 }, 00:25:54.643 { 00:25:54.643 "name": "BaseBdev4", 00:25:54.643 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:54.643 "is_configured": true, 00:25:54.643 "data_offset": 2048, 00:25:54.643 "data_size": 63488 00:25:54.643 } 00:25:54.643 ] 00:25:54.643 }' 00:25:54.643 07:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.643 07:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:25:54.906 "name": "raid_bdev1", 00:25:54.906 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:54.906 "strip_size_kb": 0, 00:25:54.906 "state": "online", 00:25:54.906 "raid_level": "raid1", 00:25:54.906 "superblock": true, 00:25:54.906 "num_base_bdevs": 4, 00:25:54.906 "num_base_bdevs_discovered": 2, 00:25:54.906 "num_base_bdevs_operational": 2, 00:25:54.906 "base_bdevs_list": [ 00:25:54.906 { 00:25:54.906 "name": null, 00:25:54.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.906 "is_configured": false, 00:25:54.906 "data_offset": 0, 00:25:54.906 "data_size": 63488 00:25:54.906 }, 00:25:54.906 { 00:25:54.906 "name": null, 00:25:54.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.906 "is_configured": false, 00:25:54.906 "data_offset": 2048, 00:25:54.906 "data_size": 63488 00:25:54.906 }, 00:25:54.906 { 00:25:54.906 "name": "BaseBdev3", 00:25:54.906 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:54.906 "is_configured": true, 00:25:54.906 "data_offset": 2048, 00:25:54.906 "data_size": 63488 00:25:54.906 }, 00:25:54.906 { 00:25:54.906 "name": "BaseBdev4", 00:25:54.906 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:54.906 "is_configured": true, 00:25:54.906 "data_offset": 2048, 00:25:54.906 "data_size": 63488 00:25:54.906 } 00:25:54.906 ] 00:25:54.906 }' 00:25:54.906 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # local 
es=0 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.167 [2024-10-07 07:46:54.529173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.167 [2024-10-07 07:46:54.529489] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:55.167 [2024-10-07 07:46:54.529512] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:55.167 request: 00:25:55.167 { 00:25:55.167 "base_bdev": "BaseBdev1", 00:25:55.167 "raid_bdev": "raid_bdev1", 00:25:55.167 "method": "bdev_raid_add_base_bdev", 00:25:55.167 "req_id": 1 00:25:55.167 } 00:25:55.167 Got JSON-RPC error response 00:25:55.167 response: 00:25:55.167 { 00:25:55.167 "code": -22, 00:25:55.167 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:55.167 } 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@656 -- # es=1 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:25:55.167 07:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.106 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.106 "name": "raid_bdev1", 00:25:56.106 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:56.106 "strip_size_kb": 0, 00:25:56.106 "state": "online", 00:25:56.106 "raid_level": "raid1", 00:25:56.106 "superblock": true, 00:25:56.106 "num_base_bdevs": 4, 00:25:56.106 "num_base_bdevs_discovered": 2, 00:25:56.106 "num_base_bdevs_operational": 2, 00:25:56.106 "base_bdevs_list": [ 00:25:56.106 { 00:25:56.106 "name": null, 00:25:56.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.106 "is_configured": false, 00:25:56.106 "data_offset": 0, 00:25:56.106 "data_size": 63488 00:25:56.106 }, 00:25:56.106 { 00:25:56.106 "name": null, 00:25:56.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.106 "is_configured": false, 00:25:56.106 "data_offset": 2048, 00:25:56.106 "data_size": 63488 00:25:56.106 }, 00:25:56.106 { 00:25:56.106 "name": "BaseBdev3", 00:25:56.106 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:56.106 "is_configured": true, 00:25:56.106 "data_offset": 2048, 00:25:56.106 "data_size": 63488 00:25:56.106 }, 00:25:56.106 { 00:25:56.106 "name": "BaseBdev4", 00:25:56.106 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:56.106 "is_configured": true, 00:25:56.106 "data_offset": 2048, 00:25:56.106 "data_size": 63488 00:25:56.106 } 00:25:56.106 ] 00:25:56.106 }' 00:25:56.107 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.107 07:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:56.677 07:46:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.677 07:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:56.677 07:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:56.677 "name": "raid_bdev1", 00:25:56.677 "uuid": "9fdd0fcf-0791-4b9c-9695-8e26bc021f0f", 00:25:56.677 "strip_size_kb": 0, 00:25:56.677 "state": "online", 00:25:56.677 "raid_level": "raid1", 00:25:56.677 "superblock": true, 00:25:56.677 "num_base_bdevs": 4, 00:25:56.677 "num_base_bdevs_discovered": 2, 00:25:56.677 "num_base_bdevs_operational": 2, 00:25:56.677 "base_bdevs_list": [ 00:25:56.677 { 00:25:56.677 "name": null, 00:25:56.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.677 "is_configured": false, 00:25:56.677 "data_offset": 0, 00:25:56.677 "data_size": 63488 00:25:56.677 }, 00:25:56.677 { 00:25:56.677 "name": null, 00:25:56.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.677 "is_configured": false, 00:25:56.677 "data_offset": 2048, 00:25:56.678 "data_size": 63488 00:25:56.678 }, 00:25:56.678 { 00:25:56.678 "name": "BaseBdev3", 00:25:56.678 "uuid": "c85fb2f3-0629-5802-85b0-6ec3148af50c", 00:25:56.678 "is_configured": true, 00:25:56.678 "data_offset": 2048, 00:25:56.678 "data_size": 63488 00:25:56.678 }, 
00:25:56.678 { 00:25:56.678 "name": "BaseBdev4", 00:25:56.678 "uuid": "e6364c4b-cdae-5e22-b09a-631827e5b5e8", 00:25:56.678 "is_configured": true, 00:25:56.678 "data_offset": 2048, 00:25:56.678 "data_size": 63488 00:25:56.678 } 00:25:56.678 ] 00:25:56.678 }' 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78212 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' -z 78212 ']' 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # kill -0 78212 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # uname 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 78212 00:25:56.678 killing process with pid 78212 00:25:56.678 Received shutdown signal, test time was about 60.000000 seconds 00:25:56.678 00:25:56.678 Latency(us) 00:25:56.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.678 =================================================================================================================== 00:25:56.678 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@963 -- # 
'[' reactor_0 = sudo ']' 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 78212' 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # kill 78212 00:25:56.678 [2024-10-07 07:46:56.124031] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:56.678 07:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@977 -- # wait 78212 00:25:56.678 [2024-10-07 07:46:56.124158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:56.678 [2024-10-07 07:46:56.124233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:56.678 [2024-10-07 07:46:56.124244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:57.246 [2024-10-07 07:46:56.649094] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:58.625 ************************************ 00:25:58.625 END TEST raid_rebuild_test_sb 00:25:58.625 ************************************ 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:25:58.625 00:25:58.625 real 0m26.451s 00:25:58.625 user 0m31.993s 00:25:58.625 sys 0m4.292s 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.625 07:46:58 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:25:58.625 07:46:58 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:25:58.625 07:46:58 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:25:58.625 07:46:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:58.625 ************************************ 00:25:58.625 START TEST raid_rebuild_test_io 
00:25:58.625 ************************************ 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 4 false true true 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78979 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78979 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # '[' -z 78979 ']' 00:25:58.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local max_retries=100 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@843 -- # xtrace_disable 00:25:58.625 07:46:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:58.884 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:58.884 Zero copy mechanism will not be used. 00:25:58.884 [2024-10-07 07:46:58.214793] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:25:58.884 [2024-10-07 07:46:58.214978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78979 ] 00:25:58.884 [2024-10-07 07:46:58.403821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.142 [2024-10-07 07:46:58.648994] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.400 [2024-10-07 07:46:58.882801] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:59.400 [2024-10-07 07:46:58.882874] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:59.658 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:25:59.658 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # return 0 00:25:59.659 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:59.659 07:46:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:59.659 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.659 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 BaseBdev1_malloc 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 [2024-10-07 07:46:59.245576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:59.918 [2024-10-07 07:46:59.245840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.918 [2024-10-07 07:46:59.245878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:59.918 [2024-10-07 07:46:59.245899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.918 [2024-10-07 07:46:59.248521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.918 [2024-10-07 07:46:59.248566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:59.918 BaseBdev1 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 
00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 BaseBdev2_malloc 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 [2024-10-07 07:46:59.318911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:59.918 [2024-10-07 07:46:59.318991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.918 [2024-10-07 07:46:59.319033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:59.918 [2024-10-07 07:46:59.319052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.918 [2024-10-07 07:46:59.321738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.918 [2024-10-07 07:46:59.321785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:59.918 BaseBdev2 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 BaseBdev3_malloc 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 [2024-10-07 07:46:59.369691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:59.918 [2024-10-07 07:46:59.369768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.918 [2024-10-07 07:46:59.369797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:59.918 [2024-10-07 07:46:59.369813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.918 [2024-10-07 07:46:59.372361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.918 [2024-10-07 07:46:59.372415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:59.918 BaseBdev3 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 BaseBdev4_malloc 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.918 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.918 [2024-10-07 07:46:59.424301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:59.918 [2024-10-07 07:46:59.424373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.918 [2024-10-07 07:46:59.424398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:59.919 [2024-10-07 07:46:59.424413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.919 [2024-10-07 07:46:59.427029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.919 [2024-10-07 07:46:59.427079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:59.919 BaseBdev4 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.919 spare_malloc 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:25:59.919 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.178 spare_delay 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.178 [2024-10-07 07:46:59.488128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:00.178 [2024-10-07 07:46:59.488311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.178 [2024-10-07 07:46:59.488345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:00.178 [2024-10-07 07:46:59.488360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.178 [2024-10-07 07:46:59.490886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.178 [2024-10-07 07:46:59.490924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:00.178 spare 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.178 [2024-10-07 07:46:59.496187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:00.178 [2024-10-07 07:46:59.498516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:00.178 [2024-10-07 07:46:59.498586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:26:00.178 [2024-10-07 07:46:59.498641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:00.178 [2024-10-07 07:46:59.498754] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:00.178 [2024-10-07 07:46:59.498769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:00.178 [2024-10-07 07:46:59.499083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:00.178 [2024-10-07 07:46:59.499294] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:00.178 [2024-10-07 07:46:59.499319] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:00.178 [2024-10-07 07:46:59.499496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.178 "name": "raid_bdev1", 00:26:00.178 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:00.178 "strip_size_kb": 0, 00:26:00.178 "state": "online", 00:26:00.178 "raid_level": "raid1", 00:26:00.178 "superblock": false, 00:26:00.178 "num_base_bdevs": 4, 00:26:00.178 "num_base_bdevs_discovered": 4, 00:26:00.178 "num_base_bdevs_operational": 4, 00:26:00.178 "base_bdevs_list": [ 00:26:00.178 { 00:26:00.178 "name": "BaseBdev1", 00:26:00.178 "uuid": "3672e38a-70bd-52d3-89c0-6456342c7c2a", 00:26:00.178 "is_configured": true, 00:26:00.178 "data_offset": 0, 00:26:00.178 "data_size": 65536 00:26:00.178 }, 00:26:00.178 { 00:26:00.178 "name": "BaseBdev2", 00:26:00.178 "uuid": "580806de-534b-5a10-ad62-cc3d2bca6254", 00:26:00.178 "is_configured": true, 00:26:00.178 "data_offset": 0, 00:26:00.178 "data_size": 65536 00:26:00.178 }, 00:26:00.178 { 00:26:00.178 "name": "BaseBdev3", 00:26:00.178 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:00.178 "is_configured": true, 00:26:00.178 "data_offset": 0, 00:26:00.178 "data_size": 65536 00:26:00.178 }, 00:26:00.178 { 00:26:00.178 "name": "BaseBdev4", 00:26:00.178 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:00.178 "is_configured": true, 00:26:00.178 
"data_offset": 0, 00:26:00.178 "data_size": 65536 00:26:00.178 } 00:26:00.178 ] 00:26:00.178 }' 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.178 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:00.438 [2024-10-07 07:46:59.936665] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.438 07:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:00.697 07:47:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.697 [2024-10-07 07:47:00.024314] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.697 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.697 "name": "raid_bdev1", 00:26:00.697 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:00.697 "strip_size_kb": 0, 00:26:00.697 "state": "online", 00:26:00.697 "raid_level": "raid1", 00:26:00.697 "superblock": false, 00:26:00.697 "num_base_bdevs": 4, 00:26:00.697 "num_base_bdevs_discovered": 3, 00:26:00.697 "num_base_bdevs_operational": 3, 00:26:00.697 "base_bdevs_list": [ 00:26:00.697 { 00:26:00.697 "name": null, 00:26:00.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.697 "is_configured": false, 00:26:00.697 "data_offset": 0, 00:26:00.697 "data_size": 65536 00:26:00.697 }, 00:26:00.697 { 00:26:00.697 "name": "BaseBdev2", 00:26:00.697 "uuid": "580806de-534b-5a10-ad62-cc3d2bca6254", 00:26:00.697 "is_configured": true, 00:26:00.697 "data_offset": 0, 00:26:00.697 "data_size": 65536 00:26:00.697 }, 00:26:00.697 { 00:26:00.697 "name": "BaseBdev3", 00:26:00.697 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:00.697 "is_configured": true, 00:26:00.697 "data_offset": 0, 00:26:00.697 "data_size": 65536 00:26:00.697 }, 00:26:00.698 { 00:26:00.698 "name": "BaseBdev4", 00:26:00.698 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:00.698 "is_configured": true, 00:26:00.698 "data_offset": 0, 00:26:00.698 "data_size": 65536 00:26:00.698 } 00:26:00.698 ] 00:26:00.698 }' 00:26:00.698 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.698 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.698 [2024-10-07 07:47:00.161104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:00.698 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:26:00.698 Zero copy mechanism will not be used. 00:26:00.698 Running I/O for 60 seconds... 00:26:00.957 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:00.957 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:00.957 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.957 [2024-10-07 07:47:00.413916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:00.957 07:47:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:00.957 07:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:00.957 [2024-10-07 07:47:00.500945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:00.957 [2024-10-07 07:47:00.503460] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:01.217 [2024-10-07 07:47:00.639514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:01.217 [2024-10-07 07:47:00.641073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:01.476 [2024-10-07 07:47:00.858475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:01.477 [2024-10-07 07:47:00.858840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:01.736 138.00 IOPS, 414.00 MiB/s [2024-10-07 07:47:01.204598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:01.736 [2024-10-07 07:47:01.205227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 
00:26:01.996 [2024-10-07 07:47:01.326268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:01.996 "name": "raid_bdev1", 00:26:01.996 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:01.996 "strip_size_kb": 0, 00:26:01.996 "state": "online", 00:26:01.996 "raid_level": "raid1", 00:26:01.996 "superblock": false, 00:26:01.996 "num_base_bdevs": 4, 00:26:01.996 "num_base_bdevs_discovered": 4, 00:26:01.996 "num_base_bdevs_operational": 4, 00:26:01.996 "process": { 00:26:01.996 "type": "rebuild", 00:26:01.996 "target": "spare", 00:26:01.996 "progress": { 00:26:01.996 "blocks": 10240, 00:26:01.996 "percent": 15 00:26:01.996 } 00:26:01.996 }, 00:26:01.996 "base_bdevs_list": [ 00:26:01.996 { 00:26:01.996 
"name": "spare", 00:26:01.996 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:01.996 "is_configured": true, 00:26:01.996 "data_offset": 0, 00:26:01.996 "data_size": 65536 00:26:01.996 }, 00:26:01.996 { 00:26:01.996 "name": "BaseBdev2", 00:26:01.996 "uuid": "580806de-534b-5a10-ad62-cc3d2bca6254", 00:26:01.996 "is_configured": true, 00:26:01.996 "data_offset": 0, 00:26:01.996 "data_size": 65536 00:26:01.996 }, 00:26:01.996 { 00:26:01.996 "name": "BaseBdev3", 00:26:01.996 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:01.996 "is_configured": true, 00:26:01.996 "data_offset": 0, 00:26:01.996 "data_size": 65536 00:26:01.996 }, 00:26:01.996 { 00:26:01.996 "name": "BaseBdev4", 00:26:01.996 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:01.996 "is_configured": true, 00:26:01.996 "data_offset": 0, 00:26:01.996 "data_size": 65536 00:26:01.996 } 00:26:01.996 ] 00:26:01.996 }' 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:01.996 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:02.255 [2024-10-07 07:47:01.589540] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.255 [2024-10-07 07:47:01.695782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:02.255 [2024-10-07 07:47:01.703872] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:02.255 [2024-10-07 07:47:01.714930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.255 [2024-10-07 07:47:01.714983] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.255 [2024-10-07 07:47:01.715020] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:02.255 [2024-10-07 07:47:01.754936] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:02.255 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:02.256 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:02.514 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.514 "name": "raid_bdev1", 00:26:02.514 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:02.514 "strip_size_kb": 0, 00:26:02.514 "state": "online", 00:26:02.514 "raid_level": "raid1", 00:26:02.514 "superblock": false, 00:26:02.514 "num_base_bdevs": 4, 00:26:02.514 "num_base_bdevs_discovered": 3, 00:26:02.514 "num_base_bdevs_operational": 3, 00:26:02.514 "base_bdevs_list": [ 00:26:02.514 { 00:26:02.514 "name": null, 00:26:02.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.515 "is_configured": false, 00:26:02.515 "data_offset": 0, 00:26:02.515 "data_size": 65536 00:26:02.515 }, 00:26:02.515 { 00:26:02.515 "name": "BaseBdev2", 00:26:02.515 "uuid": "580806de-534b-5a10-ad62-cc3d2bca6254", 00:26:02.515 "is_configured": true, 00:26:02.515 "data_offset": 0, 00:26:02.515 "data_size": 65536 00:26:02.515 }, 00:26:02.515 { 00:26:02.515 "name": "BaseBdev3", 00:26:02.515 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:02.515 "is_configured": true, 00:26:02.515 "data_offset": 0, 00:26:02.515 "data_size": 65536 00:26:02.515 }, 00:26:02.515 { 00:26:02.515 "name": "BaseBdev4", 00:26:02.515 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:02.515 "is_configured": true, 00:26:02.515 "data_offset": 0, 00:26:02.515 "data_size": 65536 00:26:02.515 } 00:26:02.515 ] 00:26:02.515 }' 00:26:02.515 07:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.515 07:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:26:02.774 139.00 IOPS, 417.00 MiB/s 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:02.774 "name": "raid_bdev1", 00:26:02.774 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:02.774 "strip_size_kb": 0, 00:26:02.774 "state": "online", 00:26:02.774 "raid_level": "raid1", 00:26:02.774 "superblock": false, 00:26:02.774 "num_base_bdevs": 4, 00:26:02.774 "num_base_bdevs_discovered": 3, 00:26:02.774 "num_base_bdevs_operational": 3, 00:26:02.774 "base_bdevs_list": [ 00:26:02.774 { 00:26:02.774 "name": null, 00:26:02.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.774 "is_configured": false, 00:26:02.774 "data_offset": 0, 00:26:02.774 "data_size": 65536 00:26:02.774 }, 00:26:02.774 { 00:26:02.774 "name": "BaseBdev2", 00:26:02.774 "uuid": "580806de-534b-5a10-ad62-cc3d2bca6254", 00:26:02.774 "is_configured": true, 00:26:02.774 
"data_offset": 0, 00:26:02.774 "data_size": 65536 00:26:02.774 }, 00:26:02.774 { 00:26:02.774 "name": "BaseBdev3", 00:26:02.774 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:02.774 "is_configured": true, 00:26:02.774 "data_offset": 0, 00:26:02.774 "data_size": 65536 00:26:02.774 }, 00:26:02.774 { 00:26:02.774 "name": "BaseBdev4", 00:26:02.774 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:02.774 "is_configured": true, 00:26:02.774 "data_offset": 0, 00:26:02.774 "data_size": 65536 00:26:02.774 } 00:26:02.774 ] 00:26:02.774 }' 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:02.774 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:03.033 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:03.033 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:03.033 07:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:03.033 07:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:03.033 [2024-10-07 07:47:02.352747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:03.033 07:47:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:03.033 07:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:03.033 [2024-10-07 07:47:02.409741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:03.033 [2024-10-07 07:47:02.412339] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:03.033 [2024-10-07 07:47:02.524038] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:03.033 [2024-10-07 07:47:02.524667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:03.292 [2024-10-07 07:47:02.727835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:03.292 [2024-10-07 07:47:02.728200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:03.552 [2024-10-07 07:47:02.982341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:03.552 [2024-10-07 07:47:02.982996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:03.812 137.33 IOPS, 412.00 MiB/s [2024-10-07 07:47:03.202801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:03.812 [2024-10-07 07:47:03.203163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:04.071 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:04.072 "name": "raid_bdev1", 00:26:04.072 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:04.072 "strip_size_kb": 0, 00:26:04.072 "state": "online", 00:26:04.072 "raid_level": "raid1", 00:26:04.072 "superblock": false, 00:26:04.072 "num_base_bdevs": 4, 00:26:04.072 "num_base_bdevs_discovered": 4, 00:26:04.072 "num_base_bdevs_operational": 4, 00:26:04.072 "process": { 00:26:04.072 "type": "rebuild", 00:26:04.072 "target": "spare", 00:26:04.072 "progress": { 00:26:04.072 "blocks": 12288, 00:26:04.072 "percent": 18 00:26:04.072 } 00:26:04.072 }, 00:26:04.072 "base_bdevs_list": [ 00:26:04.072 { 00:26:04.072 "name": "spare", 00:26:04.072 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:04.072 "is_configured": true, 00:26:04.072 "data_offset": 0, 00:26:04.072 "data_size": 65536 00:26:04.072 }, 00:26:04.072 { 00:26:04.072 "name": "BaseBdev2", 00:26:04.072 "uuid": "580806de-534b-5a10-ad62-cc3d2bca6254", 00:26:04.072 "is_configured": true, 00:26:04.072 "data_offset": 0, 00:26:04.072 "data_size": 65536 00:26:04.072 }, 00:26:04.072 { 00:26:04.072 "name": "BaseBdev3", 00:26:04.072 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:04.072 "is_configured": true, 00:26:04.072 "data_offset": 0, 00:26:04.072 "data_size": 65536 00:26:04.072 }, 00:26:04.072 { 00:26:04.072 "name": "BaseBdev4", 00:26:04.072 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:04.072 "is_configured": true, 00:26:04.072 "data_offset": 0, 00:26:04.072 "data_size": 65536 00:26:04.072 } 00:26:04.072 ] 00:26:04.072 }' 00:26:04.072 07:47:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:04.072 [2024-10-07 07:47:03.550383] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:04.072 [2024-10-07 07:47:03.568974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:04.072 [2024-10-07 07:47:03.608197] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:26:04.072 [2024-10-07 07:47:03.608249] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:04.072 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:04.334 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:04.334 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:04.334 "name": "raid_bdev1", 00:26:04.334 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:04.334 "strip_size_kb": 0, 00:26:04.334 "state": "online", 00:26:04.334 "raid_level": "raid1", 00:26:04.334 "superblock": false, 00:26:04.334 "num_base_bdevs": 4, 00:26:04.334 "num_base_bdevs_discovered": 3, 00:26:04.334 "num_base_bdevs_operational": 3, 00:26:04.334 "process": { 00:26:04.334 "type": "rebuild", 00:26:04.334 "target": "spare", 00:26:04.334 "progress": { 00:26:04.334 "blocks": 16384, 00:26:04.334 "percent": 25 00:26:04.334 } 00:26:04.334 }, 00:26:04.334 "base_bdevs_list": [ 00:26:04.334 { 00:26:04.334 "name": "spare", 00:26:04.334 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:04.334 
"is_configured": true, 00:26:04.334 "data_offset": 0, 00:26:04.334 "data_size": 65536 00:26:04.334 }, 00:26:04.334 { 00:26:04.334 "name": null, 00:26:04.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.334 "is_configured": false, 00:26:04.334 "data_offset": 0, 00:26:04.335 "data_size": 65536 00:26:04.335 }, 00:26:04.335 { 00:26:04.335 "name": "BaseBdev3", 00:26:04.335 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:04.335 "is_configured": true, 00:26:04.335 "data_offset": 0, 00:26:04.335 "data_size": 65536 00:26:04.335 }, 00:26:04.335 { 00:26:04.335 "name": "BaseBdev4", 00:26:04.335 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:04.335 "is_configured": true, 00:26:04.335 "data_offset": 0, 00:26:04.335 "data_size": 65536 00:26:04.335 } 00:26:04.335 ] 00:26:04.335 }' 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=513 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:04.335 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:04.335 "name": "raid_bdev1", 00:26:04.335 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:04.335 "strip_size_kb": 0, 00:26:04.335 "state": "online", 00:26:04.335 "raid_level": "raid1", 00:26:04.335 "superblock": false, 00:26:04.335 "num_base_bdevs": 4, 00:26:04.335 "num_base_bdevs_discovered": 3, 00:26:04.335 "num_base_bdevs_operational": 3, 00:26:04.335 "process": { 00:26:04.336 "type": "rebuild", 00:26:04.336 "target": "spare", 00:26:04.336 "progress": { 00:26:04.336 "blocks": 18432, 00:26:04.336 "percent": 28 00:26:04.336 } 00:26:04.336 }, 00:26:04.336 "base_bdevs_list": [ 00:26:04.336 { 00:26:04.336 "name": "spare", 00:26:04.336 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:04.336 "is_configured": true, 00:26:04.336 "data_offset": 0, 00:26:04.336 "data_size": 65536 00:26:04.336 }, 00:26:04.336 { 00:26:04.336 "name": null, 00:26:04.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.336 "is_configured": false, 00:26:04.336 "data_offset": 0, 00:26:04.336 "data_size": 65536 00:26:04.336 }, 00:26:04.336 { 00:26:04.336 "name": "BaseBdev3", 00:26:04.336 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:04.336 "is_configured": true, 00:26:04.336 "data_offset": 0, 00:26:04.336 "data_size": 65536 00:26:04.336 }, 00:26:04.336 { 00:26:04.336 "name": 
"BaseBdev4", 00:26:04.336 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:04.336 "is_configured": true, 00:26:04.336 "data_offset": 0, 00:26:04.336 "data_size": 65536 00:26:04.336 } 00:26:04.336 ] 00:26:04.336 }' 00:26:04.336 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:04.336 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:04.336 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:04.615 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:04.615 07:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:04.615 [2024-10-07 07:47:03.948236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:04.615 [2024-10-07 07:47:03.948597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:04.886 119.50 IOPS, 358.50 MiB/s [2024-10-07 07:47:04.392520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:05.453 07:47:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:05.453 "name": "raid_bdev1", 00:26:05.453 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:05.453 "strip_size_kb": 0, 00:26:05.453 "state": "online", 00:26:05.453 "raid_level": "raid1", 00:26:05.453 "superblock": false, 00:26:05.453 "num_base_bdevs": 4, 00:26:05.453 "num_base_bdevs_discovered": 3, 00:26:05.453 "num_base_bdevs_operational": 3, 00:26:05.453 "process": { 00:26:05.453 "type": "rebuild", 00:26:05.453 "target": "spare", 00:26:05.453 "progress": { 00:26:05.453 "blocks": 38912, 00:26:05.453 "percent": 59 00:26:05.453 } 00:26:05.453 }, 00:26:05.453 "base_bdevs_list": [ 00:26:05.453 { 00:26:05.453 "name": "spare", 00:26:05.453 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:05.453 "is_configured": true, 00:26:05.453 "data_offset": 0, 00:26:05.453 "data_size": 65536 00:26:05.453 }, 00:26:05.453 { 00:26:05.453 "name": null, 00:26:05.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.453 "is_configured": false, 00:26:05.453 "data_offset": 0, 00:26:05.453 "data_size": 65536 00:26:05.453 }, 00:26:05.453 { 00:26:05.453 "name": "BaseBdev3", 00:26:05.453 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:05.453 "is_configured": true, 00:26:05.453 "data_offset": 0, 00:26:05.453 "data_size": 65536 00:26:05.453 }, 00:26:05.453 { 00:26:05.453 "name": "BaseBdev4", 00:26:05.453 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 
00:26:05.453 "is_configured": true, 00:26:05.453 "data_offset": 0, 00:26:05.453 "data_size": 65536 00:26:05.453 } 00:26:05.453 ] 00:26:05.453 }' 00:26:05.453 07:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:05.453 [2024-10-07 07:47:05.011955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:26:05.712 07:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:05.712 07:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:05.712 07:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:05.712 07:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:06.280 106.00 IOPS, 318.00 MiB/s [2024-10-07 07:47:05.685212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:26:06.280 [2024-10-07 07:47:05.793267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:06.539 07:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:06.797 07:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:06.797 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:06.797 "name": "raid_bdev1", 00:26:06.797 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:06.797 "strip_size_kb": 0, 00:26:06.797 "state": "online", 00:26:06.797 "raid_level": "raid1", 00:26:06.797 "superblock": false, 00:26:06.797 "num_base_bdevs": 4, 00:26:06.797 "num_base_bdevs_discovered": 3, 00:26:06.797 "num_base_bdevs_operational": 3, 00:26:06.797 "process": { 00:26:06.797 "type": "rebuild", 00:26:06.797 "target": "spare", 00:26:06.797 "progress": { 00:26:06.797 "blocks": 55296, 00:26:06.797 "percent": 84 00:26:06.797 } 00:26:06.797 }, 00:26:06.797 "base_bdevs_list": [ 00:26:06.797 { 00:26:06.797 "name": "spare", 00:26:06.797 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:06.797 "is_configured": true, 00:26:06.797 "data_offset": 0, 00:26:06.797 "data_size": 65536 00:26:06.797 }, 00:26:06.797 { 00:26:06.797 "name": null, 00:26:06.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.797 "is_configured": false, 00:26:06.797 "data_offset": 0, 00:26:06.797 "data_size": 65536 00:26:06.797 }, 00:26:06.797 { 00:26:06.797 "name": "BaseBdev3", 00:26:06.797 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:06.797 "is_configured": true, 00:26:06.797 "data_offset": 0, 00:26:06.797 "data_size": 65536 00:26:06.797 }, 00:26:06.797 { 00:26:06.797 "name": "BaseBdev4", 00:26:06.798 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:06.798 "is_configured": true, 00:26:06.798 "data_offset": 0, 00:26:06.798 
"data_size": 65536 00:26:06.798 } 00:26:06.798 ] 00:26:06.798 }' 00:26:06.798 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:06.798 97.33 IOPS, 292.00 MiB/s 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:06.798 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:06.798 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:06.798 07:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:07.056 [2024-10-07 07:47:06.571733] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:07.315 [2024-10-07 07:47:06.677647] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:07.315 [2024-10-07 07:47:06.680554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.884 88.43 IOPS, 265.29 MiB/s 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.884 07:47:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:07.884 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:07.884 "name": "raid_bdev1", 00:26:07.884 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:07.884 "strip_size_kb": 0, 00:26:07.884 "state": "online", 00:26:07.884 "raid_level": "raid1", 00:26:07.884 "superblock": false, 00:26:07.884 "num_base_bdevs": 4, 00:26:07.884 "num_base_bdevs_discovered": 3, 00:26:07.884 "num_base_bdevs_operational": 3, 00:26:07.884 "base_bdevs_list": [ 00:26:07.884 { 00:26:07.884 "name": "spare", 00:26:07.884 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:07.884 "is_configured": true, 00:26:07.884 "data_offset": 0, 00:26:07.884 "data_size": 65536 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": null, 00:26:07.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.885 "is_configured": false, 00:26:07.885 "data_offset": 0, 00:26:07.885 "data_size": 65536 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": "BaseBdev3", 00:26:07.885 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:07.885 "is_configured": true, 00:26:07.885 "data_offset": 0, 00:26:07.885 "data_size": 65536 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": "BaseBdev4", 00:26:07.885 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:07.885 "is_configured": true, 00:26:07.885 "data_offset": 0, 00:26:07.885 "data_size": 65536 00:26:07.885 } 00:26:07.885 ] 00:26:07.885 }' 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:07.885 "name": "raid_bdev1", 00:26:07.885 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:07.885 "strip_size_kb": 0, 00:26:07.885 "state": "online", 00:26:07.885 "raid_level": "raid1", 00:26:07.885 "superblock": false, 00:26:07.885 "num_base_bdevs": 4, 00:26:07.885 "num_base_bdevs_discovered": 3, 00:26:07.885 "num_base_bdevs_operational": 3, 00:26:07.885 "base_bdevs_list": [ 00:26:07.885 { 00:26:07.885 "name": "spare", 00:26:07.885 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:07.885 "is_configured": true, 
00:26:07.885 "data_offset": 0, 00:26:07.885 "data_size": 65536 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": null, 00:26:07.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.885 "is_configured": false, 00:26:07.885 "data_offset": 0, 00:26:07.885 "data_size": 65536 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": "BaseBdev3", 00:26:07.885 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:07.885 "is_configured": true, 00:26:07.885 "data_offset": 0, 00:26:07.885 "data_size": 65536 00:26:07.885 }, 00:26:07.885 { 00:26:07.885 "name": "BaseBdev4", 00:26:07.885 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:07.885 "is_configured": true, 00:26:07.885 "data_offset": 0, 00:26:07.885 "data_size": 65536 00:26:07.885 } 00:26:07.885 ] 00:26:07.885 }' 00:26:07.885 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.145 "name": "raid_bdev1", 00:26:08.145 "uuid": "361e162e-c2af-4aff-b675-106a13ff37a9", 00:26:08.145 "strip_size_kb": 0, 00:26:08.145 "state": "online", 00:26:08.145 "raid_level": "raid1", 00:26:08.145 "superblock": false, 00:26:08.145 "num_base_bdevs": 4, 00:26:08.145 "num_base_bdevs_discovered": 3, 00:26:08.145 "num_base_bdevs_operational": 3, 00:26:08.145 "base_bdevs_list": [ 00:26:08.145 { 00:26:08.145 "name": "spare", 00:26:08.145 "uuid": "8b25e13d-ebd2-56f7-bb51-544b22aaa7a9", 00:26:08.145 "is_configured": true, 00:26:08.145 "data_offset": 0, 00:26:08.145 "data_size": 65536 00:26:08.145 }, 00:26:08.145 { 00:26:08.145 "name": null, 00:26:08.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.145 "is_configured": false, 00:26:08.145 "data_offset": 0, 00:26:08.145 "data_size": 65536 00:26:08.145 }, 00:26:08.145 { 00:26:08.145 "name": "BaseBdev3", 00:26:08.145 "uuid": "565787c3-ddd4-5522-8e36-26c1991b7d71", 00:26:08.145 "is_configured": true, 00:26:08.145 "data_offset": 0, 00:26:08.145 
"data_size": 65536 00:26:08.145 }, 00:26:08.145 { 00:26:08.145 "name": "BaseBdev4", 00:26:08.145 "uuid": "e639238e-8ade-5db9-a907-d214212e435b", 00:26:08.145 "is_configured": true, 00:26:08.145 "data_offset": 0, 00:26:08.145 "data_size": 65536 00:26:08.145 } 00:26:08.145 ] 00:26:08.145 }' 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.145 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 07:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:08.404 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:08.404 07:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 [2024-10-07 07:47:07.929951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:08.404 [2024-10-07 07:47:07.930099] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:08.662 00:26:08.662 Latency(us) 00:26:08.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.662 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:26:08.662 raid_bdev1 : 7.82 82.76 248.27 0.00 0.00 17555.59 335.48 110849.46 00:26:08.662 =================================================================================================================== 00:26:08.662 Total : 82.76 248.27 0.00 0.00 17555.59 335.48 110849.46 00:26:08.662 [2024-10-07 07:47:08.005272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.662 [2024-10-07 07:47:08.005322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.662 [2024-10-07 07:47:08.005432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:08.662 [2024-10-07 07:47:08.005445] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:08.662 { 00:26:08.662 "results": [ 00:26:08.662 { 00:26:08.662 "job": "raid_bdev1", 00:26:08.662 "core_mask": "0x1", 00:26:08.662 "workload": "randrw", 00:26:08.662 "percentage": 50, 00:26:08.662 "status": "finished", 00:26:08.662 "queue_depth": 2, 00:26:08.662 "io_size": 3145728, 00:26:08.662 "runtime": 7.818241, 00:26:08.662 "iops": 82.75518751596427, 00:26:08.662 "mibps": 248.26556254789278, 00:26:08.662 "io_failed": 0, 00:26:08.662 "io_timeout": 0, 00:26:08.662 "avg_latency_us": 17555.5925634798, 00:26:08.662 "min_latency_us": 335.4819047619048, 00:26:08.662 "max_latency_us": 110849.46285714286 00:26:08.662 } 00:26:08.662 ], 00:26:08.662 "core_count": 1 00:26:08.662 } 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:08.662 07:47:08 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.662 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:26:08.921 /dev/nbd0 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local i 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # break 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:08.921 1+0 records in 00:26:08.921 1+0 records out 00:26:08.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498012 s, 8.2 MB/s 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # size=4096 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # return 0 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.921 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:09.180 /dev/nbd1 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local i 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # break 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:09.180 1+0 records in 
00:26:09.180 1+0 records out 00:26:09.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640741 s, 6.4 MB/s 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # size=4096 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # return 0 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:09.180 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:09.438 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:26:09.438 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:09.438 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:09.438 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:09.438 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:09.438 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:09.438 07:47:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:09.697 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 
00:26:09.957 /dev/nbd1 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local i 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # break 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:09.957 1+0 records in 00:26:09.957 1+0 records out 00:26:09.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624445 s, 6.6 MB/s 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # size=4096 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # return 0 
00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:09.957 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:10.216 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:26:10.216 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:10.216 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:10.216 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:10.216 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:10.216 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:10.216 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 
-- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:10.476 07:47:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78979 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' -z 78979 ']' 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # kill -0 78979 00:26:10.735 07:47:10 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # uname 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 78979 00:26:10.735 killing process with pid 78979 00:26:10.735 Received shutdown signal, test time was about 9.987212 seconds 00:26:10.735 00:26:10.735 Latency(us) 00:26:10.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.735 =================================================================================================================== 00:26:10.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # echo 'killing process with pid 78979' 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # kill 78979 00:26:10.735 [2024-10-07 07:47:10.150881] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:10.735 07:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@977 -- # wait 78979 00:26:11.304 [2024-10-07 07:47:10.607342] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:26:12.679 00:26:12.679 real 0m14.015s 00:26:12.679 user 0m17.688s 00:26:12.679 sys 0m2.121s 00:26:12.679 ************************************ 00:26:12.679 END TEST raid_rebuild_test_io 00:26:12.679 ************************************ 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # xtrace_disable 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:26:12.679 07:47:12 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:26:12.679 07:47:12 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:26:12.679 07:47:12 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:26:12.679 07:47:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:12.679 ************************************ 00:26:12.679 START TEST raid_rebuild_test_sb_io 00:26:12.679 ************************************ 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 4 true true true 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:12.679 07:47:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # 
raid_pid=79394 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79394 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # '[' -z 79394 ']' 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local max_retries=100 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@843 -- # xtrace_disable 00:26:12.679 07:47:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:12.938 [2024-10-07 07:47:12.287271] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:26:12.938 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:12.938 Zero copy mechanism will not be used. 
00:26:12.938 [2024-10-07 07:47:12.287687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79394 ] 00:26:12.938 [2024-10-07 07:47:12.469731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.197 [2024-10-07 07:47:12.688855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.456 [2024-10-07 07:47:12.889839] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:13.456 [2024-10-07 07:47:12.890098] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:13.715 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # return 0 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.716 BaseBdev1_malloc 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.716 [2024-10-07 07:47:13.270194] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:13.716 [2024-10-07 07:47:13.270406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.716 [2024-10-07 07:47:13.270479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:13.716 [2024-10-07 07:47:13.270501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.716 [2024-10-07 07:47:13.273232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.716 [2024-10-07 07:47:13.273402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:13.716 BaseBdev1 00:26:13.716 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 BaseBdev2_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 [2024-10-07 07:47:13.333450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:13.976 [2024-10-07 07:47:13.333638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:26:13.976 [2024-10-07 07:47:13.333670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:13.976 [2024-10-07 07:47:13.333685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.976 [2024-10-07 07:47:13.336110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.976 [2024-10-07 07:47:13.336250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:13.976 BaseBdev2 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 BaseBdev3_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 [2024-10-07 07:47:13.385064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:13.976 [2024-10-07 07:47:13.385131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.976 [2024-10-07 07:47:13.385160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:13.976 
[2024-10-07 07:47:13.385176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.976 [2024-10-07 07:47:13.387631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.976 [2024-10-07 07:47:13.387675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:13.976 BaseBdev3 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 BaseBdev4_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 [2024-10-07 07:47:13.433981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:13.976 [2024-10-07 07:47:13.434048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.976 [2024-10-07 07:47:13.434071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:13.976 [2024-10-07 07:47:13.434087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.976 [2024-10-07 07:47:13.436680] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.976 [2024-10-07 07:47:13.436745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:13.976 BaseBdev4 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 spare_malloc 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 spare_delay 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 [2024-10-07 07:47:13.497407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:13.976 [2024-10-07 07:47:13.497476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.976 [2024-10-07 07:47:13.497499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:26:13.976 [2024-10-07 07:47:13.497514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.976 [2024-10-07 07:47:13.500069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.976 [2024-10-07 07:47:13.500113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:13.976 spare 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.976 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 [2024-10-07 07:47:13.505464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.976 [2024-10-07 07:47:13.507496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:13.976 [2024-10-07 07:47:13.507567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:13.976 [2024-10-07 07:47:13.507619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:13.976 [2024-10-07 07:47:13.507814] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:13.977 [2024-10-07 07:47:13.507836] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:13.977 [2024-10-07 07:47:13.508114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:13.977 [2024-10-07 07:47:13.508299] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:13.977 [2024-10-07 07:47:13.508316] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:13.977 [2024-10-07 07:47:13.508455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.977 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:14.236 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.236 "name": "raid_bdev1", 00:26:14.236 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:14.236 "strip_size_kb": 0, 00:26:14.236 "state": "online", 00:26:14.236 "raid_level": "raid1", 00:26:14.236 "superblock": true, 00:26:14.236 "num_base_bdevs": 4, 00:26:14.236 "num_base_bdevs_discovered": 4, 00:26:14.236 "num_base_bdevs_operational": 4, 00:26:14.236 "base_bdevs_list": [ 00:26:14.236 { 00:26:14.236 "name": "BaseBdev1", 00:26:14.236 "uuid": "42ea6cbd-7d2b-5a4e-9349-69f14c76e391", 00:26:14.236 "is_configured": true, 00:26:14.236 "data_offset": 2048, 00:26:14.236 "data_size": 63488 00:26:14.236 }, 00:26:14.236 { 00:26:14.236 "name": "BaseBdev2", 00:26:14.236 "uuid": "1f85e88f-2356-5e5c-8945-895cb863d779", 00:26:14.236 "is_configured": true, 00:26:14.236 "data_offset": 2048, 00:26:14.236 "data_size": 63488 00:26:14.236 }, 00:26:14.236 { 00:26:14.236 "name": "BaseBdev3", 00:26:14.236 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:14.236 "is_configured": true, 00:26:14.236 "data_offset": 2048, 00:26:14.236 "data_size": 63488 00:26:14.236 }, 00:26:14.236 { 00:26:14.236 "name": "BaseBdev4", 00:26:14.236 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:14.236 "is_configured": true, 00:26:14.236 "data_offset": 2048, 00:26:14.236 "data_size": 63488 00:26:14.236 } 00:26:14.236 ] 00:26:14.236 }' 00:26:14.236 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.236 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:14.496 [2024-10-07 07:47:13.941935] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:14.496 07:47:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:14.496 [2024-10-07 07:47:14.005584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:14.496 07:47:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:14.496 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:14.756 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.756 "name": "raid_bdev1", 00:26:14.756 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:14.756 "strip_size_kb": 0, 00:26:14.756 "state": "online", 00:26:14.756 "raid_level": "raid1", 00:26:14.756 
"superblock": true, 00:26:14.756 "num_base_bdevs": 4, 00:26:14.756 "num_base_bdevs_discovered": 3, 00:26:14.756 "num_base_bdevs_operational": 3, 00:26:14.756 "base_bdevs_list": [ 00:26:14.756 { 00:26:14.756 "name": null, 00:26:14.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.756 "is_configured": false, 00:26:14.756 "data_offset": 0, 00:26:14.756 "data_size": 63488 00:26:14.756 }, 00:26:14.756 { 00:26:14.756 "name": "BaseBdev2", 00:26:14.756 "uuid": "1f85e88f-2356-5e5c-8945-895cb863d779", 00:26:14.756 "is_configured": true, 00:26:14.756 "data_offset": 2048, 00:26:14.756 "data_size": 63488 00:26:14.756 }, 00:26:14.756 { 00:26:14.756 "name": "BaseBdev3", 00:26:14.756 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:14.756 "is_configured": true, 00:26:14.756 "data_offset": 2048, 00:26:14.756 "data_size": 63488 00:26:14.756 }, 00:26:14.756 { 00:26:14.756 "name": "BaseBdev4", 00:26:14.756 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:14.756 "is_configured": true, 00:26:14.756 "data_offset": 2048, 00:26:14.756 "data_size": 63488 00:26:14.756 } 00:26:14.756 ] 00:26:14.756 }' 00:26:14.756 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.756 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:14.756 [2024-10-07 07:47:14.093915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:14.756 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:14.756 Zero copy mechanism will not be used. 00:26:14.756 Running I/O for 60 seconds... 
00:26:15.015 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:15.015 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:15.015 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:15.015 [2024-10-07 07:47:14.478069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:15.015 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:15.015 07:47:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:15.015 [2024-10-07 07:47:14.541115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:15.015 [2024-10-07 07:47:14.543445] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:15.274 [2024-10-07 07:47:14.682299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:15.274 [2024-10-07 07:47:14.816676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:15.274 [2024-10-07 07:47:14.817016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:15.844 176.00 IOPS, 528.00 MiB/s [2024-10-07 07:47:15.158654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:15.844 [2024-10-07 07:47:15.306783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:15.844 [2024-10-07 07:47:15.313773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:16.103 "name": "raid_bdev1", 00:26:16.103 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:16.103 "strip_size_kb": 0, 00:26:16.103 "state": "online", 00:26:16.103 "raid_level": "raid1", 00:26:16.103 "superblock": true, 00:26:16.103 "num_base_bdevs": 4, 00:26:16.103 "num_base_bdevs_discovered": 4, 00:26:16.103 "num_base_bdevs_operational": 4, 00:26:16.103 "process": { 00:26:16.103 "type": "rebuild", 00:26:16.103 "target": "spare", 00:26:16.103 "progress": { 00:26:16.103 "blocks": 10240, 00:26:16.103 "percent": 16 00:26:16.103 } 00:26:16.103 }, 00:26:16.103 "base_bdevs_list": [ 00:26:16.103 { 00:26:16.103 "name": "spare", 00:26:16.103 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:16.103 "is_configured": true, 00:26:16.103 "data_offset": 2048, 00:26:16.103 "data_size": 63488 00:26:16.103 }, 00:26:16.103 { 
00:26:16.103 "name": "BaseBdev2", 00:26:16.103 "uuid": "1f85e88f-2356-5e5c-8945-895cb863d779", 00:26:16.103 "is_configured": true, 00:26:16.103 "data_offset": 2048, 00:26:16.103 "data_size": 63488 00:26:16.103 }, 00:26:16.103 { 00:26:16.103 "name": "BaseBdev3", 00:26:16.103 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:16.103 "is_configured": true, 00:26:16.103 "data_offset": 2048, 00:26:16.103 "data_size": 63488 00:26:16.103 }, 00:26:16.103 { 00:26:16.103 "name": "BaseBdev4", 00:26:16.103 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:16.103 "is_configured": true, 00:26:16.103 "data_offset": 2048, 00:26:16.103 "data_size": 63488 00:26:16.103 } 00:26:16.103 ] 00:26:16.103 }' 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:16.103 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:16.363 [2024-10-07 07:47:15.666475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:16.363 [2024-10-07 07:47:15.668786] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:16.363 [2024-10-07 07:47:15.769949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:16.363 [2024-10-07 07:47:15.770399] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:16.363 [2024-10-07 07:47:15.771404] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:16.363 [2024-10-07 07:47:15.780612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.363 [2024-10-07 07:47:15.780666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:16.363 [2024-10-07 07:47:15.780681] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:16.363 [2024-10-07 07:47:15.804319] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.363 "name": "raid_bdev1", 00:26:16.363 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:16.363 "strip_size_kb": 0, 00:26:16.363 "state": "online", 00:26:16.363 "raid_level": "raid1", 00:26:16.363 "superblock": true, 00:26:16.363 "num_base_bdevs": 4, 00:26:16.363 "num_base_bdevs_discovered": 3, 00:26:16.363 "num_base_bdevs_operational": 3, 00:26:16.363 "base_bdevs_list": [ 00:26:16.363 { 00:26:16.363 "name": null, 00:26:16.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.363 "is_configured": false, 00:26:16.363 "data_offset": 0, 00:26:16.363 "data_size": 63488 00:26:16.363 }, 00:26:16.363 { 00:26:16.363 "name": "BaseBdev2", 00:26:16.363 "uuid": "1f85e88f-2356-5e5c-8945-895cb863d779", 00:26:16.363 "is_configured": true, 00:26:16.363 "data_offset": 2048, 00:26:16.363 "data_size": 63488 00:26:16.363 }, 00:26:16.363 { 00:26:16.363 "name": "BaseBdev3", 00:26:16.363 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:16.363 "is_configured": true, 00:26:16.363 "data_offset": 2048, 00:26:16.363 "data_size": 63488 00:26:16.363 }, 00:26:16.363 { 00:26:16.363 "name": "BaseBdev4", 00:26:16.363 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:16.363 "is_configured": true, 00:26:16.363 "data_offset": 2048, 00:26:16.363 "data_size": 63488 00:26:16.363 } 00:26:16.363 ] 00:26:16.363 }' 
00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.363 07:47:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:16.882 166.00 IOPS, 498.00 MiB/s 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:16.882 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:16.883 "name": "raid_bdev1", 00:26:16.883 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:16.883 "strip_size_kb": 0, 00:26:16.883 "state": "online", 00:26:16.883 "raid_level": "raid1", 00:26:16.883 "superblock": true, 00:26:16.883 "num_base_bdevs": 4, 00:26:16.883 "num_base_bdevs_discovered": 3, 00:26:16.883 "num_base_bdevs_operational": 3, 00:26:16.883 "base_bdevs_list": [ 00:26:16.883 { 00:26:16.883 "name": null, 00:26:16.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.883 "is_configured": false, 
00:26:16.883 "data_offset": 0, 00:26:16.883 "data_size": 63488 00:26:16.883 }, 00:26:16.883 { 00:26:16.883 "name": "BaseBdev2", 00:26:16.883 "uuid": "1f85e88f-2356-5e5c-8945-895cb863d779", 00:26:16.883 "is_configured": true, 00:26:16.883 "data_offset": 2048, 00:26:16.883 "data_size": 63488 00:26:16.883 }, 00:26:16.883 { 00:26:16.883 "name": "BaseBdev3", 00:26:16.883 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:16.883 "is_configured": true, 00:26:16.883 "data_offset": 2048, 00:26:16.883 "data_size": 63488 00:26:16.883 }, 00:26:16.883 { 00:26:16.883 "name": "BaseBdev4", 00:26:16.883 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:16.883 "is_configured": true, 00:26:16.883 "data_offset": 2048, 00:26:16.883 "data_size": 63488 00:26:16.883 } 00:26:16.883 ] 00:26:16.883 }' 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:16.883 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:16.883 [2024-10-07 07:47:16.424191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:17.142 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:17.142 07:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:17.142 [2024-10-07 07:47:16.470863] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:17.142 [2024-10-07 07:47:16.473132] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:17.142 [2024-10-07 07:47:16.589835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:17.142 [2024-10-07 07:47:16.591239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:17.402 [2024-10-07 07:47:16.809424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:17.402 [2024-10-07 07:47:16.809766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:17.661 166.33 IOPS, 499.00 MiB/s [2024-10-07 07:47:17.149983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:17.661 [2024-10-07 07:47:17.150520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:17.920 [2024-10-07 07:47:17.283038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:17.920 [2024-10-07 07:47:17.283355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:17.920 07:47:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:17.920 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:18.180 "name": "raid_bdev1", 00:26:18.180 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:18.180 "strip_size_kb": 0, 00:26:18.180 "state": "online", 00:26:18.180 "raid_level": "raid1", 00:26:18.180 "superblock": true, 00:26:18.180 "num_base_bdevs": 4, 00:26:18.180 "num_base_bdevs_discovered": 4, 00:26:18.180 "num_base_bdevs_operational": 4, 00:26:18.180 "process": { 00:26:18.180 "type": "rebuild", 00:26:18.180 "target": "spare", 00:26:18.180 "progress": { 00:26:18.180 "blocks": 10240, 00:26:18.180 "percent": 16 00:26:18.180 } 00:26:18.180 }, 00:26:18.180 "base_bdevs_list": [ 00:26:18.180 { 00:26:18.180 "name": "spare", 00:26:18.180 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:18.180 "is_configured": true, 00:26:18.180 "data_offset": 2048, 00:26:18.180 "data_size": 63488 00:26:18.180 }, 00:26:18.180 { 00:26:18.180 "name": "BaseBdev2", 00:26:18.180 "uuid": "1f85e88f-2356-5e5c-8945-895cb863d779", 00:26:18.180 "is_configured": true, 00:26:18.180 "data_offset": 2048, 00:26:18.180 "data_size": 63488 00:26:18.180 }, 00:26:18.180 { 00:26:18.180 "name": "BaseBdev3", 00:26:18.180 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:18.180 "is_configured": true, 00:26:18.180 "data_offset": 2048, 00:26:18.180 
"data_size": 63488 00:26:18.180 }, 00:26:18.180 { 00:26:18.180 "name": "BaseBdev4", 00:26:18.180 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:18.180 "is_configured": true, 00:26:18.180 "data_offset": 2048, 00:26:18.180 "data_size": 63488 00:26:18.180 } 00:26:18.180 ] 00:26:18.180 }' 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:26:18.180 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:18.180 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:18.180 [2024-10-07 07:47:17.614257] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:18.439 [2024-10-07 07:47:17.829825] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 
00:26:18.439 [2024-10-07 07:47:17.829883] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:26:18.439 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:18.439 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:26:18.439 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:26:18.439 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:18.439 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:18.439 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:18.439 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:18.440 "name": "raid_bdev1", 00:26:18.440 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:18.440 "strip_size_kb": 0, 00:26:18.440 "state": "online", 00:26:18.440 "raid_level": "raid1", 00:26:18.440 "superblock": true, 00:26:18.440 "num_base_bdevs": 4, 00:26:18.440 
"num_base_bdevs_discovered": 3, 00:26:18.440 "num_base_bdevs_operational": 3, 00:26:18.440 "process": { 00:26:18.440 "type": "rebuild", 00:26:18.440 "target": "spare", 00:26:18.440 "progress": { 00:26:18.440 "blocks": 14336, 00:26:18.440 "percent": 22 00:26:18.440 } 00:26:18.440 }, 00:26:18.440 "base_bdevs_list": [ 00:26:18.440 { 00:26:18.440 "name": "spare", 00:26:18.440 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:18.440 "is_configured": true, 00:26:18.440 "data_offset": 2048, 00:26:18.440 "data_size": 63488 00:26:18.440 }, 00:26:18.440 { 00:26:18.440 "name": null, 00:26:18.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.440 "is_configured": false, 00:26:18.440 "data_offset": 0, 00:26:18.440 "data_size": 63488 00:26:18.440 }, 00:26:18.440 { 00:26:18.440 "name": "BaseBdev3", 00:26:18.440 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:18.440 "is_configured": true, 00:26:18.440 "data_offset": 2048, 00:26:18.440 "data_size": 63488 00:26:18.440 }, 00:26:18.440 { 00:26:18.440 "name": "BaseBdev4", 00:26:18.440 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:18.440 "is_configured": true, 00:26:18.440 "data_offset": 2048, 00:26:18.440 "data_size": 63488 00:26:18.440 } 00:26:18.440 ] 00:26:18.440 }' 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=527 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:18.440 07:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.699 07:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:18.699 07:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:18.699 "name": "raid_bdev1", 00:26:18.699 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:18.699 "strip_size_kb": 0, 00:26:18.699 "state": "online", 00:26:18.699 "raid_level": "raid1", 00:26:18.699 "superblock": true, 00:26:18.699 "num_base_bdevs": 4, 00:26:18.699 "num_base_bdevs_discovered": 3, 00:26:18.699 "num_base_bdevs_operational": 3, 00:26:18.699 "process": { 00:26:18.699 "type": "rebuild", 00:26:18.699 "target": "spare", 00:26:18.699 "progress": { 00:26:18.699 "blocks": 16384, 00:26:18.699 "percent": 25 00:26:18.699 } 00:26:18.699 }, 00:26:18.699 "base_bdevs_list": [ 00:26:18.699 { 00:26:18.699 "name": "spare", 00:26:18.699 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:18.699 "is_configured": true, 00:26:18.699 "data_offset": 2048, 00:26:18.699 "data_size": 63488 
00:26:18.699 }, 00:26:18.699 { 00:26:18.699 "name": null, 00:26:18.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.700 "is_configured": false, 00:26:18.700 "data_offset": 0, 00:26:18.700 "data_size": 63488 00:26:18.700 }, 00:26:18.700 { 00:26:18.700 "name": "BaseBdev3", 00:26:18.700 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:18.700 "is_configured": true, 00:26:18.700 "data_offset": 2048, 00:26:18.700 "data_size": 63488 00:26:18.700 }, 00:26:18.700 { 00:26:18.700 "name": "BaseBdev4", 00:26:18.700 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:18.700 "is_configured": true, 00:26:18.700 "data_offset": 2048, 00:26:18.700 "data_size": 63488 00:26:18.700 } 00:26:18.700 ] 00:26:18.700 }' 00:26:18.700 07:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:18.700 07:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.700 07:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:18.700 140.00 IOPS, 420.00 MiB/s 07:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.700 07:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:18.700 [2024-10-07 07:47:18.182790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:26:18.959 [2024-10-07 07:47:18.305554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:18.959 [2024-10-07 07:47:18.306068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:19.217 [2024-10-07 07:47:18.751650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:26:19.476 [2024-10-07 
07:47:18.994130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:26:19.775 124.60 IOPS, 373.80 MiB/s 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:19.775 "name": "raid_bdev1", 00:26:19.775 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:19.775 "strip_size_kb": 0, 00:26:19.775 "state": "online", 00:26:19.775 "raid_level": "raid1", 00:26:19.775 "superblock": true, 00:26:19.775 "num_base_bdevs": 4, 00:26:19.775 "num_base_bdevs_discovered": 3, 00:26:19.775 "num_base_bdevs_operational": 3, 00:26:19.775 "process": { 00:26:19.775 "type": "rebuild", 00:26:19.775 "target": "spare", 00:26:19.775 "progress": { 
00:26:19.775 "blocks": 32768, 00:26:19.775 "percent": 51 00:26:19.775 } 00:26:19.775 }, 00:26:19.775 "base_bdevs_list": [ 00:26:19.775 { 00:26:19.775 "name": "spare", 00:26:19.775 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:19.775 "is_configured": true, 00:26:19.775 "data_offset": 2048, 00:26:19.775 "data_size": 63488 00:26:19.775 }, 00:26:19.775 { 00:26:19.775 "name": null, 00:26:19.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.775 "is_configured": false, 00:26:19.775 "data_offset": 0, 00:26:19.775 "data_size": 63488 00:26:19.775 }, 00:26:19.775 { 00:26:19.775 "name": "BaseBdev3", 00:26:19.775 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:19.775 "is_configured": true, 00:26:19.775 "data_offset": 2048, 00:26:19.775 "data_size": 63488 00:26:19.775 }, 00:26:19.775 { 00:26:19.775 "name": "BaseBdev4", 00:26:19.775 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:19.775 "is_configured": true, 00:26:19.775 "data_offset": 2048, 00:26:19.775 "data_size": 63488 00:26:19.775 } 00:26:19.775 ] 00:26:19.775 }' 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.775 07:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:20.055 [2024-10-07 07:47:19.521582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:26:20.055 [2024-10-07 07:47:19.522150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:26:20.879 112.00 IOPS, 336.00 MiB/s [2024-10-07 
07:47:20.193371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:20.879 "name": "raid_bdev1", 00:26:20.879 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:20.879 "strip_size_kb": 0, 00:26:20.879 "state": "online", 00:26:20.879 "raid_level": "raid1", 00:26:20.879 "superblock": true, 00:26:20.879 "num_base_bdevs": 4, 00:26:20.879 "num_base_bdevs_discovered": 3, 00:26:20.879 "num_base_bdevs_operational": 3, 00:26:20.879 "process": { 00:26:20.879 "type": "rebuild", 00:26:20.879 "target": "spare", 00:26:20.879 "progress": { 00:26:20.879 "blocks": 51200, 
00:26:20.879 "percent": 80 00:26:20.879 } 00:26:20.879 }, 00:26:20.879 "base_bdevs_list": [ 00:26:20.879 { 00:26:20.879 "name": "spare", 00:26:20.879 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:20.879 "is_configured": true, 00:26:20.879 "data_offset": 2048, 00:26:20.879 "data_size": 63488 00:26:20.879 }, 00:26:20.879 { 00:26:20.879 "name": null, 00:26:20.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.879 "is_configured": false, 00:26:20.879 "data_offset": 0, 00:26:20.879 "data_size": 63488 00:26:20.879 }, 00:26:20.879 { 00:26:20.879 "name": "BaseBdev3", 00:26:20.879 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:20.879 "is_configured": true, 00:26:20.879 "data_offset": 2048, 00:26:20.879 "data_size": 63488 00:26:20.879 }, 00:26:20.879 { 00:26:20.879 "name": "BaseBdev4", 00:26:20.879 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:20.879 "is_configured": true, 00:26:20.879 "data_offset": 2048, 00:26:20.879 "data_size": 63488 00:26:20.879 } 00:26:20.879 ] 00:26:20.879 }' 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:20.879 07:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:21.137 [2024-10-07 07:47:20.628384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:26:21.394 [2024-10-07 07:47:20.736183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:26:21.395 [2024-10-07 07:47:20.736801] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:26:21.652 [2024-10-07 07:47:21.042332] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:21.652 [2024-10-07 07:47:21.068028] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:21.652 [2024-10-07 07:47:21.070847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.910 101.71 IOPS, 305.14 MiB/s 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:21.910 "name": "raid_bdev1", 00:26:21.910 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:21.910 "strip_size_kb": 0, 00:26:21.910 "state": 
"online", 00:26:21.910 "raid_level": "raid1", 00:26:21.910 "superblock": true, 00:26:21.910 "num_base_bdevs": 4, 00:26:21.910 "num_base_bdevs_discovered": 3, 00:26:21.910 "num_base_bdevs_operational": 3, 00:26:21.910 "base_bdevs_list": [ 00:26:21.910 { 00:26:21.910 "name": "spare", 00:26:21.910 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:21.910 "is_configured": true, 00:26:21.910 "data_offset": 2048, 00:26:21.910 "data_size": 63488 00:26:21.910 }, 00:26:21.910 { 00:26:21.910 "name": null, 00:26:21.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.910 "is_configured": false, 00:26:21.910 "data_offset": 0, 00:26:21.910 "data_size": 63488 00:26:21.910 }, 00:26:21.910 { 00:26:21.910 "name": "BaseBdev3", 00:26:21.910 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:21.910 "is_configured": true, 00:26:21.910 "data_offset": 2048, 00:26:21.910 "data_size": 63488 00:26:21.910 }, 00:26:21.910 { 00:26:21.910 "name": "BaseBdev4", 00:26:21.910 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:21.910 "is_configured": true, 00:26:21.910 "data_offset": 2048, 00:26:21.910 "data_size": 63488 00:26:21.910 } 00:26:21.910 ] 00:26:21.910 }' 00:26:21.910 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:22.168 07:47:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:22.168 "name": "raid_bdev1", 00:26:22.168 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:22.168 "strip_size_kb": 0, 00:26:22.168 "state": "online", 00:26:22.168 "raid_level": "raid1", 00:26:22.168 "superblock": true, 00:26:22.168 "num_base_bdevs": 4, 00:26:22.168 "num_base_bdevs_discovered": 3, 00:26:22.168 "num_base_bdevs_operational": 3, 00:26:22.168 "base_bdevs_list": [ 00:26:22.168 { 00:26:22.168 "name": "spare", 00:26:22.168 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:22.168 "is_configured": true, 00:26:22.168 "data_offset": 2048, 00:26:22.168 "data_size": 63488 00:26:22.168 }, 00:26:22.168 { 00:26:22.168 "name": null, 00:26:22.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.168 "is_configured": false, 00:26:22.168 "data_offset": 0, 00:26:22.168 "data_size": 63488 00:26:22.168 }, 00:26:22.168 { 00:26:22.168 "name": "BaseBdev3", 00:26:22.168 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:22.168 "is_configured": true, 00:26:22.168 "data_offset": 2048, 00:26:22.168 
"data_size": 63488 00:26:22.168 }, 00:26:22.168 { 00:26:22.168 "name": "BaseBdev4", 00:26:22.168 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:22.168 "is_configured": true, 00:26:22.168 "data_offset": 2048, 00:26:22.168 "data_size": 63488 00:26:22.168 } 00:26:22.168 ] 00:26:22.168 }' 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:22.168 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:22.427 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.427 "name": "raid_bdev1", 00:26:22.427 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:22.427 "strip_size_kb": 0, 00:26:22.427 "state": "online", 00:26:22.427 "raid_level": "raid1", 00:26:22.427 "superblock": true, 00:26:22.427 "num_base_bdevs": 4, 00:26:22.427 "num_base_bdevs_discovered": 3, 00:26:22.427 "num_base_bdevs_operational": 3, 00:26:22.427 "base_bdevs_list": [ 00:26:22.427 { 00:26:22.427 "name": "spare", 00:26:22.427 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f", 00:26:22.427 "is_configured": true, 00:26:22.427 "data_offset": 2048, 00:26:22.427 "data_size": 63488 00:26:22.427 }, 00:26:22.427 { 00:26:22.427 "name": null, 00:26:22.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.427 "is_configured": false, 00:26:22.427 "data_offset": 0, 00:26:22.427 "data_size": 63488 00:26:22.427 }, 00:26:22.427 { 00:26:22.427 "name": "BaseBdev3", 00:26:22.427 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:22.427 "is_configured": true, 00:26:22.427 "data_offset": 2048, 00:26:22.427 "data_size": 63488 00:26:22.427 }, 00:26:22.427 { 00:26:22.427 "name": "BaseBdev4", 00:26:22.427 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:22.427 "is_configured": true, 00:26:22.427 "data_offset": 2048, 00:26:22.427 "data_size": 63488 00:26:22.427 } 00:26:22.427 ] 00:26:22.427 }' 00:26:22.427 07:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.427 07:47:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:22.687 92.88 IOPS, 278.62 MiB/s 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:22.687 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:22.687 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:22.687 [2024-10-07 07:47:22.159727] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.687 [2024-10-07 07:47:22.159760] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.946 00:26:22.946 Latency(us) 00:26:22.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.946 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:26:22.946 raid_bdev1 : 8.16 91.55 274.65 0.00 0.00 14897.52 319.88 110849.46 00:26:22.946 =================================================================================================================== 00:26:22.946 Total : 91.55 274.65 0.00 0.00 14897.52 319.88 110849.46 00:26:22.946 { 00:26:22.946 "results": [ 00:26:22.946 { 00:26:22.946 "job": "raid_bdev1", 00:26:22.946 "core_mask": "0x1", 00:26:22.946 "workload": "randrw", 00:26:22.946 "percentage": 50, 00:26:22.946 "status": "finished", 00:26:22.946 "queue_depth": 2, 00:26:22.946 "io_size": 3145728, 00:26:22.946 "runtime": 8.159413, 00:26:22.946 "iops": 91.55070346359474, 00:26:22.946 "mibps": 274.65211039078423, 00:26:22.946 "io_failed": 0, 00:26:22.946 "io_timeout": 0, 00:26:22.946 "avg_latency_us": 14897.515201121947, 00:26:22.946 "min_latency_us": 319.8780952380952, 00:26:22.946 "max_latency_us": 110849.46285714286 00:26:22.946 } 00:26:22.946 ], 00:26:22.946 "core_count": 1 00:26:22.946 } 00:26:22.946 [2024-10-07 07:47:22.278243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:26:22.946 [2024-10-07 07:47:22.278291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.946 [2024-10-07 07:47:22.278386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.946 [2024-10-07 07:47:22.278403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.946 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:26:23.205 /dev/nbd0 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local i 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # break 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:23.205 1+0 records in 00:26:23.205 1+0 records out 00:26:23.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264272 s, 15.5 MB/s 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # size=4096 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # return 0 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:23.205 
07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:23.205 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:23.464 /dev/nbd1 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local i 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # break 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:23.464 1+0 records in 00:26:23.464 1+0 records out 00:26:23.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436541 s, 9.4 MB/s 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # size=4096 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # return 0 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:23.464 07:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:23.722 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:26:23.722 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:23.722 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:23.722 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:23.722 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:23.722 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:23.722 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:23.980 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:26:24.238 /dev/nbd1 00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local nbd_name=nbd1
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local i
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i = 1 ))
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # (( i <= 20 ))
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # break
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i = 1 ))
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # (( i <= 20 ))
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:26:24.238 1+0 records in
00:26:24.238 1+0 records out
00:26:24.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268528 s, 15.3 MB/s
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # size=4096
00:26:24.238 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:26:24.239 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']'
00:26:24.239 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # return 0
00:26:24.239 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:24.239 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:26:24.239 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:26:24.497 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:26:24.497 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:26:24.497 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:26:24.497 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:26:24.497 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:26:24.497 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:24.497 07:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:24.755 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.014 [2024-10-07 07:47:24.478826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:26:25.014 [2024-10-07 07:47:24.478897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:25.014 [2024-10-07 07:47:24.478924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:26:25.014 [2024-10-07 07:47:24.478939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:25.014 [2024-10-07 07:47:24.481824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:25.014 [2024-10-07 07:47:24.481991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:26:25.014 [2024-10-07 07:47:24.482110] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:26:25.014 [2024-10-07 07:47:24.482179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:26:25.014 [2024-10-07 07:47:24.482344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:26:25.014 [2024-10-07 07:47:24.482443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:26:25.014 spare
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.014 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.273 [2024-10-07 07:47:24.582539] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:26:25.273 [2024-10-07 07:47:24.582814] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:26:25.273 [2024-10-07 07:47:24.583215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160
00:26:25.273 [2024-10-07 07:47:24.583538] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:26:25.273 [2024-10-07 07:47:24.583664] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:26:25.273 [2024-10-07 07:47:24.583919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:25.273 "name": "raid_bdev1",
00:26:25.273 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:25.273 "strip_size_kb": 0,
00:26:25.273 "state": "online",
00:26:25.273 "raid_level": "raid1",
00:26:25.273 "superblock": true,
00:26:25.273 "num_base_bdevs": 4,
00:26:25.273 "num_base_bdevs_discovered": 3,
00:26:25.273 "num_base_bdevs_operational": 3,
00:26:25.273 "base_bdevs_list": [
00:26:25.273 {
00:26:25.273 "name": "spare",
00:26:25.273 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f",
00:26:25.273 "is_configured": true,
00:26:25.273 "data_offset": 2048,
00:26:25.273 "data_size": 63488
00:26:25.273 },
00:26:25.273 {
00:26:25.273 "name": null,
00:26:25.273 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:25.273 "is_configured": false,
00:26:25.273 "data_offset": 2048,
00:26:25.273 "data_size": 63488
00:26:25.273 },
00:26:25.273 {
00:26:25.273 "name": "BaseBdev3",
00:26:25.273 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:25.273 "is_configured": true,
00:26:25.273 "data_offset": 2048,
00:26:25.273 "data_size": 63488
00:26:25.273 },
00:26:25.273 {
00:26:25.273 "name": "BaseBdev4",
00:26:25.273 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:25.273 "is_configured": true,
00:26:25.273 "data_offset": 2048,
00:26:25.273 "data_size": 63488
00:26:25.273 }
00:26:25.273 ]
00:26:25.273 }'
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:25.273 07:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:25.532 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.793 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:26:25.793 "name": "raid_bdev1",
00:26:25.793 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:25.793 "strip_size_kb": 0,
00:26:25.793 "state": "online",
00:26:25.793 "raid_level": "raid1",
00:26:25.793 "superblock": true,
00:26:25.793 "num_base_bdevs": 4,
00:26:25.793 "num_base_bdevs_discovered": 3,
00:26:25.793 "num_base_bdevs_operational": 3,
00:26:25.793 "base_bdevs_list": [
00:26:25.793 {
00:26:25.793 "name": "spare",
00:26:25.793 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f",
00:26:25.793 "is_configured": true,
00:26:25.793 "data_offset": 2048,
00:26:25.793 "data_size": 63488
00:26:25.793 },
00:26:25.793 {
00:26:25.793 "name": null,
00:26:25.793 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:25.793 "is_configured": false,
00:26:25.793 "data_offset": 2048,
00:26:25.793 "data_size": 63488
00:26:25.793 },
00:26:25.793 {
00:26:25.793 "name": "BaseBdev3",
00:26:25.793 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:25.793 "is_configured": true,
00:26:25.793 "data_offset": 2048,
00:26:25.794 "data_size": 63488
00:26:25.794 },
00:26:25.794 {
00:26:25.794 "name": "BaseBdev4",
00:26:25.794 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:25.794 "is_configured": true,
00:26:25.794 "data_offset": 2048,
00:26:25.794 "data_size": 63488
00:26:25.794 }
00:26:25.794 ]
00:26:25.794 }'
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.794 [2024-10-07 07:47:25.260113] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:25.794 "name": "raid_bdev1",
00:26:25.794 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:25.794 "strip_size_kb": 0,
00:26:25.794 "state": "online",
00:26:25.794 "raid_level": "raid1",
00:26:25.794 "superblock": true,
00:26:25.794 "num_base_bdevs": 4,
00:26:25.794 "num_base_bdevs_discovered": 2,
00:26:25.794 "num_base_bdevs_operational": 2,
00:26:25.794 "base_bdevs_list": [
00:26:25.794 {
00:26:25.794 "name": null,
00:26:25.794 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:25.794 "is_configured": false,
00:26:25.794 "data_offset": 0,
00:26:25.794 "data_size": 63488
00:26:25.794 },
00:26:25.794 {
00:26:25.794 "name": null,
00:26:25.794 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:25.794 "is_configured": false,
00:26:25.794 "data_offset": 2048,
00:26:25.794 "data_size": 63488
00:26:25.794 },
00:26:25.794 {
00:26:25.794 "name": "BaseBdev3",
00:26:25.794 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:25.794 "is_configured": true,
00:26:25.794 "data_offset": 2048,
00:26:25.794 "data_size": 63488
00:26:25.794 },
00:26:25.794 {
00:26:25.794 "name": "BaseBdev4",
00:26:25.794 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:25.794 "is_configured": true,
00:26:25.794 "data_offset": 2048,
00:26:25.794 "data_size": 63488
00:26:25.794 }
00:26:25.794 ]
00:26:25.794 }'
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:25.794 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:26.378 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:26:26.379 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:26.379 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:26.379 [2024-10-07 07:47:25.740270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:26:26.379 [2024-10-07 07:47:25.740485] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:26:26.379 [2024-10-07 07:47:25.740518] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:26:26.379 [2024-10-07 07:47:25.740564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:26:26.379 [2024-10-07 07:47:25.756314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230
00:26:26.379 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:26.379 07:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:26:26.379 [2024-10-07 07:47:25.758689] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:26:27.316 "name": "raid_bdev1",
00:26:27.316 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:27.316 "strip_size_kb": 0,
00:26:27.316 "state": "online",
00:26:27.316 "raid_level": "raid1",
00:26:27.316 "superblock": true,
00:26:27.316 "num_base_bdevs": 4,
00:26:27.316 "num_base_bdevs_discovered": 3,
00:26:27.316 "num_base_bdevs_operational": 3,
00:26:27.316 "process": {
00:26:27.316 "type": "rebuild",
00:26:27.316 "target": "spare",
00:26:27.316 "progress": {
00:26:27.316 "blocks": 20480,
00:26:27.316 "percent": 32
00:26:27.316 }
00:26:27.316 },
00:26:27.316 "base_bdevs_list": [
00:26:27.316 {
00:26:27.316 "name": "spare",
00:26:27.316 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f",
00:26:27.316 "is_configured": true,
00:26:27.316 "data_offset": 2048,
00:26:27.316 "data_size": 63488
00:26:27.316 },
00:26:27.316 {
00:26:27.316 "name": null,
00:26:27.316 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:27.316 "is_configured": false,
00:26:27.316 "data_offset": 2048,
00:26:27.316 "data_size": 63488
00:26:27.316 },
00:26:27.316 {
00:26:27.316 "name": "BaseBdev3",
00:26:27.316 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:27.316 "is_configured": true,
00:26:27.316 "data_offset": 2048,
00:26:27.316 "data_size": 63488
00:26:27.316 },
00:26:27.316 {
00:26:27.316 "name": "BaseBdev4",
00:26:27.316 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:27.316 "is_configured": true,
00:26:27.316 "data_offset": 2048,
00:26:27.316 "data_size": 63488
00:26:27.316 }
00:26:27.316 ]
00:26:27.316 }'
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:26:27.316 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:27.576 [2024-10-07 07:47:26.904293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:26:27.576 [2024-10-07 07:47:26.966587] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:26:27.576 [2024-10-07 07:47:26.966917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:26:27.576 [2024-10-07 07:47:26.966940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:26:27.576 [2024-10-07 07:47:26.966954] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:27.576 07:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:27.576 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:27.576 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:27.576 "name": "raid_bdev1",
00:26:27.576 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:27.576 "strip_size_kb": 0,
00:26:27.576 "state": "online",
00:26:27.576 "raid_level": "raid1",
00:26:27.576 "superblock": true,
00:26:27.576 "num_base_bdevs": 4,
00:26:27.576 "num_base_bdevs_discovered": 2,
00:26:27.576 "num_base_bdevs_operational": 2,
00:26:27.576 "base_bdevs_list": [
00:26:27.576 {
00:26:27.576 "name": null,
00:26:27.576 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:27.576 "is_configured": false,
00:26:27.576 "data_offset": 0,
00:26:27.576 "data_size": 63488
00:26:27.576 },
00:26:27.576 {
00:26:27.576 "name": null,
00:26:27.576 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:27.576 "is_configured": false,
00:26:27.576 "data_offset": 2048,
00:26:27.576 "data_size": 63488
00:26:27.576 },
00:26:27.576 {
00:26:27.576 "name": "BaseBdev3",
00:26:27.576 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:27.576 "is_configured": true,
00:26:27.576 "data_offset": 2048,
00:26:27.576 "data_size": 63488
00:26:27.576 },
00:26:27.576 {
00:26:27.576 "name": "BaseBdev4",
00:26:27.576 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:27.576 "is_configured": true,
00:26:27.576 "data_offset": 2048,
00:26:27.576 "data_size": 63488
00:26:27.576 }
00:26:27.576 ]
00:26:27.576 }'
00:26:27.576 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:27.576 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:28.143 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:26:28.143 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:28.143 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:28.143 [2024-10-07 07:47:27.463792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:26:28.143 [2024-10-07 07:47:27.464006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:28.143 [2024-10-07 07:47:27.464045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:26:28.143 [2024-10-07 07:47:27.464062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:28.143 [2024-10-07 07:47:27.464630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:28.143 [2024-10-07 07:47:27.464658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:26:28.143 [2024-10-07 07:47:27.464789] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:26:28.143 [2024-10-07 07:47:27.464811] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:26:28.143 [2024-10-07 07:47:27.464824] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:26:28.143 [2024-10-07 07:47:27.464860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:26:28.143 spare
00:26:28.143 [2024-10-07 07:47:27.479908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300
00:26:28.143 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:28.143 07:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:26:28.143 [2024-10-07 07:47:27.482158] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:26:29.079 "name": "raid_bdev1",
00:26:29.079 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:29.079 "strip_size_kb": 0,
00:26:29.079 "state": "online",
00:26:29.079 "raid_level": "raid1",
00:26:29.079 "superblock": true,
00:26:29.079 "num_base_bdevs": 4,
00:26:29.079 "num_base_bdevs_discovered": 3,
00:26:29.079 "num_base_bdevs_operational": 3,
00:26:29.079 "process": {
00:26:29.079 "type": "rebuild",
00:26:29.079 "target": "spare",
00:26:29.079 "progress": {
00:26:29.079 "blocks": 20480,
00:26:29.079 "percent": 32
00:26:29.079 }
00:26:29.079 },
00:26:29.079 "base_bdevs_list": [
00:26:29.079 {
00:26:29.079 "name": "spare",
00:26:29.079 "uuid": "f78bbc58-b6aa-57a1-b7e7-34d9e040dc1f",
00:26:29.079 "is_configured": true,
00:26:29.079 "data_offset": 2048,
00:26:29.079 "data_size": 63488
00:26:29.079 },
00:26:29.079 {
00:26:29.079 "name": null,
00:26:29.079 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:29.079 "is_configured": false,
00:26:29.079 "data_offset": 2048,
00:26:29.079 "data_size": 63488
00:26:29.079 },
00:26:29.079 {
00:26:29.079 "name": "BaseBdev3",
00:26:29.079 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:29.079 "is_configured": true,
00:26:29.079 "data_offset": 2048,
00:26:29.079 "data_size": 63488
00:26:29.079 },
00:26:29.079 {
00:26:29.079 "name": "BaseBdev4",
00:26:29.079 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:29.079 "is_configured": true,
00:26:29.079 "data_offset": 2048,
00:26:29.079 "data_size": 63488
00:26:29.079 }
00:26:29.079 ]
00:26:29.079 }'
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:29.079 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:29.079 [2024-10-07 07:47:28.627958] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:26:29.339 [2024-10-07 07:47:28.690180] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:26:29.339 [2024-10-07 07:47:28.690447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:26:29.339 [2024-10-07 07:47:28.690478] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:26:29.339 [2024-10-07 07:47:28.690490] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:29.339 "name": "raid_bdev1",
00:26:29.339 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:29.339 "strip_size_kb": 0,
00:26:29.339 "state": "online",
00:26:29.339 "raid_level": "raid1",
00:26:29.339 "superblock": true,
00:26:29.339 "num_base_bdevs": 4,
00:26:29.339 "num_base_bdevs_discovered": 2,
00:26:29.339 "num_base_bdevs_operational": 2,
00:26:29.339 "base_bdevs_list": [
00:26:29.339 {
00:26:29.339 "name": null,
00:26:29.339 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:29.339 "is_configured": false,
00:26:29.339 "data_offset": 0,
00:26:29.339 "data_size": 63488
00:26:29.339 },
00:26:29.339 {
00:26:29.339 "name": null,
00:26:29.339 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:29.339 "is_configured": false,
00:26:29.339 "data_offset": 2048,
00:26:29.339 "data_size": 63488
00:26:29.339 },
00:26:29.339 {
00:26:29.339 "name": "BaseBdev3",
00:26:29.339 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:29.339 "is_configured": true,
00:26:29.339 "data_offset": 2048,
00:26:29.339 "data_size": 63488
00:26:29.339 },
00:26:29.339 {
00:26:29.339 "name": "BaseBdev4",
00:26:29.339 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:29.339 "is_configured": true,
00:26:29.339 "data_offset": 2048,
00:26:29.339 "data_size": 63488
00:26:29.339 }
00:26:29.339 ]
00:26:29.339 }'
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:29.339 07:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:29.599 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:26:29.600 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:29.859 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:26:29.859 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:26:29.859 "name": "raid_bdev1",
00:26:29.859 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4",
00:26:29.859 "strip_size_kb": 0,
00:26:29.859 "state": "online",
00:26:29.859 "raid_level": "raid1",
00:26:29.859 "superblock": true,
00:26:29.859 "num_base_bdevs": 4,
00:26:29.859 "num_base_bdevs_discovered": 2,
00:26:29.859 "num_base_bdevs_operational": 2,
00:26:29.859 "base_bdevs_list": [
00:26:29.859 {
00:26:29.859 "name": null,
00:26:29.859 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:29.859 "is_configured": false,
00:26:29.859 "data_offset": 0,
00:26:29.859 "data_size": 63488
00:26:29.859 },
00:26:29.859 {
00:26:29.859 "name": null,
00:26:29.859 "uuid": "00000000-0000-0000-0000-000000000000",
00:26:29.859 "is_configured": false,
00:26:29.859 "data_offset": 2048,
00:26:29.859 "data_size": 63488
00:26:29.859 },
00:26:29.859 {
00:26:29.859 "name": "BaseBdev3",
00:26:29.859 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2",
00:26:29.859 "is_configured": true,
00:26:29.859 "data_offset": 2048,
00:26:29.859 "data_size": 63488
00:26:29.859 },
00:26:29.860 {
00:26:29.860 "name": "BaseBdev4",
00:26:29.860 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70",
00:26:29.860 "is_configured": true,
00:26:29.860 "data_offset": 2048,
00:26:29.860 "data_size": 63488
00:26:29.860 }
00:26:29.860 ]
00:26:29.860 }'
00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:29.860 [2024-10-07 07:47:29.296100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:29.860 [2024-10-07 07:47:29.296288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.860 [2024-10-07 07:47:29.296353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:26:29.860 [2024-10-07 07:47:29.296368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.860 [2024-10-07 07:47:29.296923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.860 [2024-10-07 07:47:29.296946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:29.860 [2024-10-07 07:47:29.297037] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:29.860 [2024-10-07 07:47:29.297053] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:29.860 [2024-10-07 07:47:29.297069] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:29.860 [2024-10-07 07:47:29.297081] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:26:29.860 BaseBdev1 00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:29.860 07:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:30.798 "name": "raid_bdev1", 00:26:30.798 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:30.798 "strip_size_kb": 0, 00:26:30.798 "state": "online", 00:26:30.798 "raid_level": "raid1", 00:26:30.798 "superblock": true, 00:26:30.798 "num_base_bdevs": 4, 00:26:30.798 "num_base_bdevs_discovered": 2, 00:26:30.798 "num_base_bdevs_operational": 2, 00:26:30.798 "base_bdevs_list": [ 00:26:30.798 { 00:26:30.798 "name": null, 00:26:30.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.798 "is_configured": false, 00:26:30.798 "data_offset": 0, 00:26:30.798 "data_size": 63488 00:26:30.798 }, 00:26:30.798 { 00:26:30.798 "name": null, 00:26:30.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.798 "is_configured": false, 00:26:30.798 "data_offset": 2048, 00:26:30.798 "data_size": 63488 00:26:30.798 }, 00:26:30.798 { 00:26:30.798 "name": "BaseBdev3", 00:26:30.798 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:30.798 "is_configured": true, 00:26:30.798 "data_offset": 2048, 00:26:30.798 "data_size": 63488 00:26:30.798 }, 00:26:30.798 { 00:26:30.798 "name": "BaseBdev4", 00:26:30.798 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:30.798 "is_configured": true, 00:26:30.798 "data_offset": 2048, 00:26:30.798 "data_size": 63488 00:26:30.798 } 00:26:30.798 ] 00:26:30.798 }' 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:30.798 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:31.367 "name": "raid_bdev1", 00:26:31.367 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:31.367 "strip_size_kb": 0, 00:26:31.367 "state": "online", 00:26:31.367 "raid_level": "raid1", 00:26:31.367 "superblock": true, 00:26:31.367 "num_base_bdevs": 4, 00:26:31.367 "num_base_bdevs_discovered": 2, 00:26:31.367 "num_base_bdevs_operational": 2, 00:26:31.367 "base_bdevs_list": [ 00:26:31.367 { 00:26:31.367 "name": null, 00:26:31.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.367 "is_configured": false, 00:26:31.367 "data_offset": 0, 00:26:31.367 "data_size": 63488 00:26:31.367 }, 00:26:31.367 { 00:26:31.367 "name": null, 00:26:31.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.367 "is_configured": false, 00:26:31.367 "data_offset": 2048, 00:26:31.367 "data_size": 63488 00:26:31.367 }, 00:26:31.367 { 00:26:31.367 "name": "BaseBdev3", 00:26:31.367 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:31.367 "is_configured": true, 00:26:31.367 "data_offset": 2048, 00:26:31.367 "data_size": 63488 00:26:31.367 }, 00:26:31.367 { 00:26:31.367 "name": "BaseBdev4", 00:26:31.367 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 
00:26:31.367 "is_configured": true, 00:26:31.367 "data_offset": 2048, 00:26:31.367 "data_size": 63488 00:26:31.367 } 00:26:31.367 ] 00:26:31.367 }' 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # local es=0 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:31.367 [2024-10-07 07:47:30.880774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:31.367 [2024-10-07 
07:47:30.880955] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:31.367 [2024-10-07 07:47:30.880974] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:31.367 request: 00:26:31.367 { 00:26:31.367 "base_bdev": "BaseBdev1", 00:26:31.367 "raid_bdev": "raid_bdev1", 00:26:31.367 "method": "bdev_raid_add_base_bdev", 00:26:31.367 "req_id": 1 00:26:31.367 } 00:26:31.367 Got JSON-RPC error response 00:26:31.367 response: 00:26:31.367 { 00:26:31.367 "code": -22, 00:26:31.367 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:31.367 } 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@656 -- # es=1 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:26:31.367 07:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.744 "name": "raid_bdev1", 00:26:32.744 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:32.744 "strip_size_kb": 0, 00:26:32.744 "state": "online", 00:26:32.744 "raid_level": "raid1", 00:26:32.744 "superblock": true, 00:26:32.744 "num_base_bdevs": 4, 00:26:32.744 "num_base_bdevs_discovered": 2, 00:26:32.744 "num_base_bdevs_operational": 2, 00:26:32.744 "base_bdevs_list": [ 00:26:32.744 { 00:26:32.744 "name": null, 00:26:32.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.744 "is_configured": false, 00:26:32.744 "data_offset": 0, 00:26:32.744 "data_size": 63488 00:26:32.744 }, 00:26:32.744 { 00:26:32.744 "name": null, 00:26:32.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.744 "is_configured": false, 00:26:32.744 "data_offset": 2048, 00:26:32.744 "data_size": 63488 00:26:32.744 }, 00:26:32.744 { 00:26:32.744 "name": 
"BaseBdev3", 00:26:32.744 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:32.744 "is_configured": true, 00:26:32.744 "data_offset": 2048, 00:26:32.744 "data_size": 63488 00:26:32.744 }, 00:26:32.744 { 00:26:32.744 "name": "BaseBdev4", 00:26:32.744 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:32.744 "is_configured": true, 00:26:32.744 "data_offset": 2048, 00:26:32.744 "data_size": 63488 00:26:32.744 } 00:26:32.744 ] 00:26:32.744 }' 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.744 07:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:33.004 "name": "raid_bdev1", 00:26:33.004 "uuid": "e9736773-3a62-481f-a9e5-8745d2d627c4", 00:26:33.004 
"strip_size_kb": 0, 00:26:33.004 "state": "online", 00:26:33.004 "raid_level": "raid1", 00:26:33.004 "superblock": true, 00:26:33.004 "num_base_bdevs": 4, 00:26:33.004 "num_base_bdevs_discovered": 2, 00:26:33.004 "num_base_bdevs_operational": 2, 00:26:33.004 "base_bdevs_list": [ 00:26:33.004 { 00:26:33.004 "name": null, 00:26:33.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.004 "is_configured": false, 00:26:33.004 "data_offset": 0, 00:26:33.004 "data_size": 63488 00:26:33.004 }, 00:26:33.004 { 00:26:33.004 "name": null, 00:26:33.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.004 "is_configured": false, 00:26:33.004 "data_offset": 2048, 00:26:33.004 "data_size": 63488 00:26:33.004 }, 00:26:33.004 { 00:26:33.004 "name": "BaseBdev3", 00:26:33.004 "uuid": "513eef9c-bd03-55e3-b822-a59c56f2e7a2", 00:26:33.004 "is_configured": true, 00:26:33.004 "data_offset": 2048, 00:26:33.004 "data_size": 63488 00:26:33.004 }, 00:26:33.004 { 00:26:33.004 "name": "BaseBdev4", 00:26:33.004 "uuid": "dcb2c776-3a84-5a25-b7a7-a87dcb15fd70", 00:26:33.004 "is_configured": true, 00:26:33.004 "data_offset": 2048, 00:26:33.004 "data_size": 63488 00:26:33.004 } 00:26:33.004 ] 00:26:33.004 }' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79394 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' -z 79394 ']' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # kill -0 79394 00:26:33.004 
07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # uname 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 79394 00:26:33.004 killing process with pid 79394 00:26:33.004 Received shutdown signal, test time was about 18.409233 seconds 00:26:33.004 00:26:33.004 Latency(us) 00:26:33.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.004 =================================================================================================================== 00:26:33.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # echo 'killing process with pid 79394' 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # kill 79394 00:26:33.004 07:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@977 -- # wait 79394 00:26:33.004 [2024-10-07 07:47:32.505558] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:33.004 [2024-10-07 07:47:32.505703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:33.004 [2024-10-07 07:47:32.505789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:33.004 [2024-10-07 07:47:32.505806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:33.573 [2024-10-07 07:47:32.942437] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:34.952 07:47:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:26:34.952 00:26:34.952 real 0m22.167s 00:26:34.952 user 0m29.037s 00:26:34.952 sys 0m2.934s 00:26:34.952 07:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # xtrace_disable 00:26:34.952 07:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:34.952 ************************************ 00:26:34.952 END TEST raid_rebuild_test_sb_io 00:26:34.952 ************************************ 00:26:34.952 07:47:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:26:34.952 07:47:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:26:34.952 07:47:34 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:26:34.952 07:47:34 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:26:34.952 07:47:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:34.952 ************************************ 00:26:34.952 START TEST raid5f_state_function_test 00:26:34.952 ************************************ 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid5f 3 false 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:34.952 Process raid pid: 80121 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80121 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80121' 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80121 00:26:34.952 07:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 80121 ']' 00:26:34.953 07:47:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:34.953 07:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.953 07:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:26:34.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.953 07:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.953 07:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:26:34.953 07:47:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.953 [2024-10-07 07:47:34.487497] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:26:34.953 [2024-10-07 07:47:34.487628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.212 [2024-10-07 07:47:34.652438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.471 [2024-10-07 07:47:34.874124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.731 [2024-10-07 07:47:35.100740] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:35.731 [2024-10-07 07:47:35.100782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.991 [2024-10-07 07:47:35.340701] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:35.991 [2024-10-07 07:47:35.340910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:35.991 [2024-10-07 07:47:35.341028] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:35.991 [2024-10-07 07:47:35.341080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:35.991 [2024-10-07 07:47:35.341115] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:26:35.991 [2024-10-07 07:47:35.341208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # 
[[ 0 == 0 ]] 00:26:35.991 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.991 "name": "Existed_Raid", 00:26:35.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.991 "strip_size_kb": 64, 00:26:35.991 "state": "configuring", 00:26:35.991 "raid_level": "raid5f", 00:26:35.991 "superblock": false, 00:26:35.992 "num_base_bdevs": 3, 00:26:35.992 "num_base_bdevs_discovered": 0, 00:26:35.992 "num_base_bdevs_operational": 3, 00:26:35.992 "base_bdevs_list": [ 00:26:35.992 { 00:26:35.992 "name": "BaseBdev1", 00:26:35.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.992 "is_configured": false, 00:26:35.992 "data_offset": 0, 00:26:35.992 "data_size": 0 00:26:35.992 }, 00:26:35.992 { 00:26:35.992 "name": "BaseBdev2", 00:26:35.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.992 "is_configured": false, 00:26:35.992 "data_offset": 0, 00:26:35.992 "data_size": 0 00:26:35.992 }, 00:26:35.992 { 00:26:35.992 "name": "BaseBdev3", 00:26:35.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.992 "is_configured": false, 00:26:35.992 "data_offset": 0, 00:26:35.992 "data_size": 0 00:26:35.992 } 00:26:35.992 ] 00:26:35.992 }' 00:26:35.992 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.992 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.251 [2024-10-07 07:47:35.776746] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:36.251 [2024-10-07 07:47:35.776942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.251 [2024-10-07 07:47:35.784763] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:36.251 [2024-10-07 07:47:35.784816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:36.251 [2024-10-07 07:47:35.784828] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:36.251 [2024-10-07 07:47:35.784843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:36.251 [2024-10-07 07:47:35.784853] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:36.251 [2024-10-07 07:47:35.784868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.251 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.511 [2024-10-07 07:47:35.844471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:36.511 BaseBdev1 00:26:36.511 07:47:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.511 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.511 [ 00:26:36.511 { 00:26:36.511 "name": "BaseBdev1", 00:26:36.511 "aliases": [ 00:26:36.511 "b763733a-54d8-4720-85b8-01f0fe6149cd" 00:26:36.511 ], 00:26:36.511 "product_name": "Malloc disk", 00:26:36.511 "block_size": 512, 00:26:36.511 "num_blocks": 65536, 00:26:36.511 "uuid": "b763733a-54d8-4720-85b8-01f0fe6149cd", 00:26:36.511 "assigned_rate_limits": { 00:26:36.511 "rw_ios_per_sec": 0, 00:26:36.511 
"rw_mbytes_per_sec": 0, 00:26:36.511 "r_mbytes_per_sec": 0, 00:26:36.511 "w_mbytes_per_sec": 0 00:26:36.511 }, 00:26:36.511 "claimed": true, 00:26:36.511 "claim_type": "exclusive_write", 00:26:36.511 "zoned": false, 00:26:36.511 "supported_io_types": { 00:26:36.511 "read": true, 00:26:36.511 "write": true, 00:26:36.511 "unmap": true, 00:26:36.511 "flush": true, 00:26:36.511 "reset": true, 00:26:36.511 "nvme_admin": false, 00:26:36.511 "nvme_io": false, 00:26:36.511 "nvme_io_md": false, 00:26:36.511 "write_zeroes": true, 00:26:36.511 "zcopy": true, 00:26:36.511 "get_zone_info": false, 00:26:36.511 "zone_management": false, 00:26:36.512 "zone_append": false, 00:26:36.512 "compare": false, 00:26:36.512 "compare_and_write": false, 00:26:36.512 "abort": true, 00:26:36.512 "seek_hole": false, 00:26:36.512 "seek_data": false, 00:26:36.512 "copy": true, 00:26:36.512 "nvme_iov_md": false 00:26:36.512 }, 00:26:36.512 "memory_domains": [ 00:26:36.512 { 00:26:36.512 "dma_device_id": "system", 00:26:36.512 "dma_device_type": 1 00:26:36.512 }, 00:26:36.512 { 00:26:36.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.512 "dma_device_type": 2 00:26:36.512 } 00:26:36.512 ], 00:26:36.512 "driver_specific": {} 00:26:36.512 } 00:26:36.512 ] 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:36.512 07:47:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.512 "name": "Existed_Raid", 00:26:36.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.512 "strip_size_kb": 64, 00:26:36.512 "state": "configuring", 00:26:36.512 "raid_level": "raid5f", 00:26:36.512 "superblock": false, 00:26:36.512 "num_base_bdevs": 3, 00:26:36.512 "num_base_bdevs_discovered": 1, 00:26:36.512 "num_base_bdevs_operational": 3, 00:26:36.512 "base_bdevs_list": [ 00:26:36.512 { 00:26:36.512 "name": "BaseBdev1", 00:26:36.512 "uuid": "b763733a-54d8-4720-85b8-01f0fe6149cd", 00:26:36.512 "is_configured": true, 00:26:36.512 "data_offset": 0, 00:26:36.512 "data_size": 65536 00:26:36.512 }, 00:26:36.512 { 00:26:36.512 "name": 
"BaseBdev2", 00:26:36.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.512 "is_configured": false, 00:26:36.512 "data_offset": 0, 00:26:36.512 "data_size": 0 00:26:36.512 }, 00:26:36.512 { 00:26:36.512 "name": "BaseBdev3", 00:26:36.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.512 "is_configured": false, 00:26:36.512 "data_offset": 0, 00:26:36.512 "data_size": 0 00:26:36.512 } 00:26:36.512 ] 00:26:36.512 }' 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:36.512 07:47:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.771 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:36.771 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.771 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.771 [2024-10-07 07:47:36.316643] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:36.771 [2024-10-07 07:47:36.316701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:36.771 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:36.771 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:36.771 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:36.771 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.771 [2024-10-07 07:47:36.328752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:37.031 [2024-10-07 07:47:36.331112] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:26:37.031 [2024-10-07 07:47:36.331172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:37.031 [2024-10-07 07:47:36.331185] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:37.031 [2024-10-07 07:47:36.331199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.031 "name": "Existed_Raid", 00:26:37.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.031 "strip_size_kb": 64, 00:26:37.031 "state": "configuring", 00:26:37.031 "raid_level": "raid5f", 00:26:37.031 "superblock": false, 00:26:37.031 "num_base_bdevs": 3, 00:26:37.031 "num_base_bdevs_discovered": 1, 00:26:37.031 "num_base_bdevs_operational": 3, 00:26:37.031 "base_bdevs_list": [ 00:26:37.031 { 00:26:37.031 "name": "BaseBdev1", 00:26:37.031 "uuid": "b763733a-54d8-4720-85b8-01f0fe6149cd", 00:26:37.031 "is_configured": true, 00:26:37.031 "data_offset": 0, 00:26:37.031 "data_size": 65536 00:26:37.031 }, 00:26:37.031 { 00:26:37.031 "name": "BaseBdev2", 00:26:37.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.031 "is_configured": false, 00:26:37.031 "data_offset": 0, 00:26:37.031 "data_size": 0 00:26:37.031 }, 00:26:37.031 { 00:26:37.031 "name": "BaseBdev3", 00:26:37.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.031 "is_configured": false, 00:26:37.031 "data_offset": 0, 00:26:37.031 "data_size": 0 00:26:37.031 } 00:26:37.031 ] 00:26:37.031 }' 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.031 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.291 07:47:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:37.291 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:37.291 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.291 [2024-10-07 07:47:36.825448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:37.291 BaseBdev2 00:26:37.291 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:37.291 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:37.292 07:47:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:37.292 [ 00:26:37.292 { 00:26:37.292 "name": "BaseBdev2", 00:26:37.292 "aliases": [ 00:26:37.292 "ef069aea-bffc-4bfe-83a8-d2b228278077" 00:26:37.292 ], 00:26:37.292 "product_name": "Malloc disk", 00:26:37.292 "block_size": 512, 00:26:37.292 "num_blocks": 65536, 00:26:37.292 "uuid": "ef069aea-bffc-4bfe-83a8-d2b228278077", 00:26:37.292 "assigned_rate_limits": { 00:26:37.292 "rw_ios_per_sec": 0, 00:26:37.292 "rw_mbytes_per_sec": 0, 00:26:37.292 "r_mbytes_per_sec": 0, 00:26:37.292 "w_mbytes_per_sec": 0 00:26:37.292 }, 00:26:37.292 "claimed": true, 00:26:37.292 "claim_type": "exclusive_write", 00:26:37.292 "zoned": false, 00:26:37.292 "supported_io_types": { 00:26:37.292 "read": true, 00:26:37.292 "write": true, 00:26:37.292 "unmap": true, 00:26:37.292 "flush": true, 00:26:37.292 "reset": true, 00:26:37.292 "nvme_admin": false, 00:26:37.292 "nvme_io": false, 00:26:37.292 "nvme_io_md": false, 00:26:37.552 "write_zeroes": true, 00:26:37.552 "zcopy": true, 00:26:37.552 "get_zone_info": false, 00:26:37.552 "zone_management": false, 00:26:37.552 "zone_append": false, 00:26:37.552 "compare": false, 00:26:37.552 "compare_and_write": false, 00:26:37.552 "abort": true, 00:26:37.552 "seek_hole": false, 00:26:37.552 "seek_data": false, 00:26:37.552 "copy": true, 00:26:37.552 "nvme_iov_md": false 00:26:37.552 }, 00:26:37.552 "memory_domains": [ 00:26:37.552 { 00:26:37.552 "dma_device_id": "system", 00:26:37.552 "dma_device_type": 1 00:26:37.552 }, 00:26:37.552 { 00:26:37.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.552 "dma_device_type": 2 00:26:37.552 } 00:26:37.552 ], 00:26:37.552 "driver_specific": {} 00:26:37.552 } 00:26:37.552 ] 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:26:37.552 "name": "Existed_Raid", 00:26:37.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.552 "strip_size_kb": 64, 00:26:37.552 "state": "configuring", 00:26:37.552 "raid_level": "raid5f", 00:26:37.552 "superblock": false, 00:26:37.552 "num_base_bdevs": 3, 00:26:37.552 "num_base_bdevs_discovered": 2, 00:26:37.552 "num_base_bdevs_operational": 3, 00:26:37.552 "base_bdevs_list": [ 00:26:37.552 { 00:26:37.552 "name": "BaseBdev1", 00:26:37.552 "uuid": "b763733a-54d8-4720-85b8-01f0fe6149cd", 00:26:37.552 "is_configured": true, 00:26:37.552 "data_offset": 0, 00:26:37.552 "data_size": 65536 00:26:37.552 }, 00:26:37.552 { 00:26:37.552 "name": "BaseBdev2", 00:26:37.552 "uuid": "ef069aea-bffc-4bfe-83a8-d2b228278077", 00:26:37.552 "is_configured": true, 00:26:37.552 "data_offset": 0, 00:26:37.552 "data_size": 65536 00:26:37.552 }, 00:26:37.552 { 00:26:37.552 "name": "BaseBdev3", 00:26:37.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.552 "is_configured": false, 00:26:37.552 "data_offset": 0, 00:26:37.552 "data_size": 0 00:26:37.552 } 00:26:37.552 ] 00:26:37.552 }' 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.552 07:47:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.811 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.812 [2024-10-07 07:47:37.353136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:37.812 [2024-10-07 07:47:37.353397] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:37.812 [2024-10-07 07:47:37.353430] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:37.812 [2024-10-07 07:47:37.353748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:37.812 [2024-10-07 07:47:37.359395] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:37.812 [2024-10-07 07:47:37.359521] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:37.812 [2024-10-07 07:47:37.359989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.812 BaseBdev3 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:37.812 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.071 [ 00:26:38.071 { 00:26:38.071 "name": "BaseBdev3", 00:26:38.071 "aliases": [ 00:26:38.071 "16040ab3-095e-4d9d-b732-ba91ee6c23e0" 00:26:38.071 ], 00:26:38.071 "product_name": "Malloc disk", 00:26:38.071 "block_size": 512, 00:26:38.071 "num_blocks": 65536, 00:26:38.071 "uuid": "16040ab3-095e-4d9d-b732-ba91ee6c23e0", 00:26:38.071 "assigned_rate_limits": { 00:26:38.071 "rw_ios_per_sec": 0, 00:26:38.071 "rw_mbytes_per_sec": 0, 00:26:38.071 "r_mbytes_per_sec": 0, 00:26:38.071 "w_mbytes_per_sec": 0 00:26:38.071 }, 00:26:38.071 "claimed": true, 00:26:38.071 "claim_type": "exclusive_write", 00:26:38.071 "zoned": false, 00:26:38.071 "supported_io_types": { 00:26:38.071 "read": true, 00:26:38.071 "write": true, 00:26:38.071 "unmap": true, 00:26:38.071 "flush": true, 00:26:38.071 "reset": true, 00:26:38.071 "nvme_admin": false, 00:26:38.071 "nvme_io": false, 00:26:38.071 "nvme_io_md": false, 00:26:38.071 "write_zeroes": true, 00:26:38.071 "zcopy": true, 00:26:38.071 "get_zone_info": false, 00:26:38.071 "zone_management": false, 00:26:38.071 "zone_append": false, 00:26:38.071 "compare": false, 00:26:38.071 "compare_and_write": false, 00:26:38.071 "abort": true, 00:26:38.071 "seek_hole": false, 00:26:38.071 "seek_data": false, 00:26:38.071 "copy": true, 00:26:38.071 "nvme_iov_md": false 00:26:38.071 }, 00:26:38.071 "memory_domains": [ 00:26:38.071 { 00:26:38.071 "dma_device_id": "system", 00:26:38.071 "dma_device_type": 1 00:26:38.071 }, 00:26:38.071 { 00:26:38.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:38.071 "dma_device_type": 2 00:26:38.071 } 00:26:38.071 ], 00:26:38.071 "driver_specific": {} 00:26:38.071 } 00:26:38.071 ] 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 
0 ]] 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:38.071 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:38.072 07:47:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.072 "name": "Existed_Raid", 00:26:38.072 "uuid": "9d24293c-c035-4801-9d94-b5f5521f0d15", 00:26:38.072 "strip_size_kb": 64, 00:26:38.072 "state": "online", 00:26:38.072 "raid_level": "raid5f", 00:26:38.072 "superblock": false, 00:26:38.072 "num_base_bdevs": 3, 00:26:38.072 "num_base_bdevs_discovered": 3, 00:26:38.072 "num_base_bdevs_operational": 3, 00:26:38.072 "base_bdevs_list": [ 00:26:38.072 { 00:26:38.072 "name": "BaseBdev1", 00:26:38.072 "uuid": "b763733a-54d8-4720-85b8-01f0fe6149cd", 00:26:38.072 "is_configured": true, 00:26:38.072 "data_offset": 0, 00:26:38.072 "data_size": 65536 00:26:38.072 }, 00:26:38.072 { 00:26:38.072 "name": "BaseBdev2", 00:26:38.072 "uuid": "ef069aea-bffc-4bfe-83a8-d2b228278077", 00:26:38.072 "is_configured": true, 00:26:38.072 "data_offset": 0, 00:26:38.072 "data_size": 65536 00:26:38.072 }, 00:26:38.072 { 00:26:38.072 "name": "BaseBdev3", 00:26:38.072 "uuid": "16040ab3-095e-4d9d-b732-ba91ee6c23e0", 00:26:38.072 "is_configured": true, 00:26:38.072 "data_offset": 0, 00:26:38.072 "data_size": 65536 00:26:38.072 } 00:26:38.072 ] 00:26:38.072 }' 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.072 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:38.331 07:47:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.331 [2024-10-07 07:47:37.859203] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:38.331 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.590 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:38.590 "name": "Existed_Raid", 00:26:38.590 "aliases": [ 00:26:38.590 "9d24293c-c035-4801-9d94-b5f5521f0d15" 00:26:38.590 ], 00:26:38.590 "product_name": "Raid Volume", 00:26:38.590 "block_size": 512, 00:26:38.590 "num_blocks": 131072, 00:26:38.590 "uuid": "9d24293c-c035-4801-9d94-b5f5521f0d15", 00:26:38.590 "assigned_rate_limits": { 00:26:38.590 "rw_ios_per_sec": 0, 00:26:38.590 "rw_mbytes_per_sec": 0, 00:26:38.590 "r_mbytes_per_sec": 0, 00:26:38.590 "w_mbytes_per_sec": 0 00:26:38.590 }, 00:26:38.590 "claimed": false, 00:26:38.590 "zoned": false, 00:26:38.590 "supported_io_types": { 00:26:38.590 "read": true, 00:26:38.590 "write": true, 00:26:38.590 "unmap": false, 00:26:38.590 "flush": false, 00:26:38.590 "reset": true, 00:26:38.590 "nvme_admin": false, 00:26:38.590 "nvme_io": false, 00:26:38.590 "nvme_io_md": false, 00:26:38.590 "write_zeroes": true, 00:26:38.590 "zcopy": false, 00:26:38.590 "get_zone_info": false, 00:26:38.590 "zone_management": false, 00:26:38.590 "zone_append": false, 
00:26:38.590 "compare": false, 00:26:38.590 "compare_and_write": false, 00:26:38.590 "abort": false, 00:26:38.590 "seek_hole": false, 00:26:38.590 "seek_data": false, 00:26:38.590 "copy": false, 00:26:38.590 "nvme_iov_md": false 00:26:38.590 }, 00:26:38.590 "driver_specific": { 00:26:38.590 "raid": { 00:26:38.590 "uuid": "9d24293c-c035-4801-9d94-b5f5521f0d15", 00:26:38.590 "strip_size_kb": 64, 00:26:38.590 "state": "online", 00:26:38.590 "raid_level": "raid5f", 00:26:38.590 "superblock": false, 00:26:38.590 "num_base_bdevs": 3, 00:26:38.590 "num_base_bdevs_discovered": 3, 00:26:38.590 "num_base_bdevs_operational": 3, 00:26:38.590 "base_bdevs_list": [ 00:26:38.590 { 00:26:38.590 "name": "BaseBdev1", 00:26:38.590 "uuid": "b763733a-54d8-4720-85b8-01f0fe6149cd", 00:26:38.590 "is_configured": true, 00:26:38.590 "data_offset": 0, 00:26:38.590 "data_size": 65536 00:26:38.590 }, 00:26:38.590 { 00:26:38.590 "name": "BaseBdev2", 00:26:38.590 "uuid": "ef069aea-bffc-4bfe-83a8-d2b228278077", 00:26:38.590 "is_configured": true, 00:26:38.590 "data_offset": 0, 00:26:38.590 "data_size": 65536 00:26:38.590 }, 00:26:38.590 { 00:26:38.590 "name": "BaseBdev3", 00:26:38.590 "uuid": "16040ab3-095e-4d9d-b732-ba91ee6c23e0", 00:26:38.590 "is_configured": true, 00:26:38.590 "data_offset": 0, 00:26:38.590 "data_size": 65536 00:26:38.590 } 00:26:38.590 ] 00:26:38.590 } 00:26:38.590 } 00:26:38.590 }' 00:26:38.590 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:38.590 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:38.590 BaseBdev2 00:26:38.590 BaseBdev3' 00:26:38.590 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:38.590 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:26:38.590 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:38.590 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:38.591 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.591 07:47:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.591 07:47:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.591 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.591 [2024-10-07 07:47:38.115023] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:38.850 
07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:38.850 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.850 "name": "Existed_Raid", 00:26:38.850 "uuid": "9d24293c-c035-4801-9d94-b5f5521f0d15", 00:26:38.850 "strip_size_kb": 64, 00:26:38.850 "state": 
"online", 00:26:38.850 "raid_level": "raid5f", 00:26:38.850 "superblock": false, 00:26:38.850 "num_base_bdevs": 3, 00:26:38.850 "num_base_bdevs_discovered": 2, 00:26:38.850 "num_base_bdevs_operational": 2, 00:26:38.851 "base_bdevs_list": [ 00:26:38.851 { 00:26:38.851 "name": null, 00:26:38.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.851 "is_configured": false, 00:26:38.851 "data_offset": 0, 00:26:38.851 "data_size": 65536 00:26:38.851 }, 00:26:38.851 { 00:26:38.851 "name": "BaseBdev2", 00:26:38.851 "uuid": "ef069aea-bffc-4bfe-83a8-d2b228278077", 00:26:38.851 "is_configured": true, 00:26:38.851 "data_offset": 0, 00:26:38.851 "data_size": 65536 00:26:38.851 }, 00:26:38.851 { 00:26:38.851 "name": "BaseBdev3", 00:26:38.851 "uuid": "16040ab3-095e-4d9d-b732-ba91ee6c23e0", 00:26:38.851 "is_configured": true, 00:26:38.851 "data_offset": 0, 00:26:38.851 "data_size": 65536 00:26:38.851 } 00:26:38.851 ] 00:26:38.851 }' 00:26:38.851 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.851 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.419 [2024-10-07 07:47:38.719801] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:39.419 [2024-10-07 07:47:38.719903] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:39.419 [2024-10-07 07:47:38.818995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.419 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.419 [2024-10-07 07:47:38.875058] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:39.419 [2024-10-07 07:47:38.875122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:39.679 07:47:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.679 BaseBdev2 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:26:39.679 [ 00:26:39.679 { 00:26:39.679 "name": "BaseBdev2", 00:26:39.679 "aliases": [ 00:26:39.679 "5a5c04ec-0de7-4953-9887-0c4a94431682" 00:26:39.679 ], 00:26:39.679 "product_name": "Malloc disk", 00:26:39.679 "block_size": 512, 00:26:39.679 "num_blocks": 65536, 00:26:39.679 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:39.679 "assigned_rate_limits": { 00:26:39.679 "rw_ios_per_sec": 0, 00:26:39.679 "rw_mbytes_per_sec": 0, 00:26:39.679 "r_mbytes_per_sec": 0, 00:26:39.679 "w_mbytes_per_sec": 0 00:26:39.679 }, 00:26:39.679 "claimed": false, 00:26:39.679 "zoned": false, 00:26:39.679 "supported_io_types": { 00:26:39.679 "read": true, 00:26:39.679 "write": true, 00:26:39.679 "unmap": true, 00:26:39.679 "flush": true, 00:26:39.679 "reset": true, 00:26:39.679 "nvme_admin": false, 00:26:39.679 "nvme_io": false, 00:26:39.679 "nvme_io_md": false, 00:26:39.679 "write_zeroes": true, 00:26:39.679 "zcopy": true, 00:26:39.679 "get_zone_info": false, 00:26:39.679 "zone_management": false, 00:26:39.679 "zone_append": false, 00:26:39.679 "compare": false, 00:26:39.679 "compare_and_write": false, 00:26:39.679 "abort": true, 00:26:39.679 "seek_hole": false, 00:26:39.679 "seek_data": false, 00:26:39.679 "copy": true, 00:26:39.679 "nvme_iov_md": false 00:26:39.679 }, 00:26:39.679 "memory_domains": [ 00:26:39.679 { 00:26:39.679 "dma_device_id": "system", 00:26:39.679 "dma_device_type": 1 00:26:39.679 }, 00:26:39.679 { 00:26:39.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.679 "dma_device_type": 2 00:26:39.679 } 00:26:39.679 ], 00:26:39.679 "driver_specific": {} 00:26:39.679 } 00:26:39.679 ] 00:26:39.679 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.680 BaseBdev3 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:39.680 [ 00:26:39.680 { 00:26:39.680 "name": "BaseBdev3", 00:26:39.680 "aliases": [ 00:26:39.680 "4d78b8b9-df09-4986-b1aa-e36db77deded" 00:26:39.680 ], 00:26:39.680 "product_name": "Malloc disk", 00:26:39.680 "block_size": 512, 00:26:39.680 "num_blocks": 65536, 00:26:39.680 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:39.680 "assigned_rate_limits": { 00:26:39.680 "rw_ios_per_sec": 0, 00:26:39.680 "rw_mbytes_per_sec": 0, 00:26:39.680 "r_mbytes_per_sec": 0, 00:26:39.680 "w_mbytes_per_sec": 0 00:26:39.680 }, 00:26:39.680 "claimed": false, 00:26:39.680 "zoned": false, 00:26:39.680 "supported_io_types": { 00:26:39.680 "read": true, 00:26:39.680 "write": true, 00:26:39.680 "unmap": true, 00:26:39.680 "flush": true, 00:26:39.680 "reset": true, 00:26:39.680 "nvme_admin": false, 00:26:39.680 "nvme_io": false, 00:26:39.680 "nvme_io_md": false, 00:26:39.680 "write_zeroes": true, 00:26:39.680 "zcopy": true, 00:26:39.680 "get_zone_info": false, 00:26:39.680 "zone_management": false, 00:26:39.680 "zone_append": false, 00:26:39.680 "compare": false, 00:26:39.680 "compare_and_write": false, 00:26:39.680 "abort": true, 00:26:39.680 "seek_hole": false, 00:26:39.680 "seek_data": false, 00:26:39.680 "copy": true, 00:26:39.680 "nvme_iov_md": false 00:26:39.680 }, 00:26:39.680 "memory_domains": [ 00:26:39.680 { 00:26:39.680 "dma_device_id": "system", 00:26:39.680 "dma_device_type": 1 00:26:39.680 }, 00:26:39.680 { 00:26:39.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.680 "dma_device_type": 2 00:26:39.680 } 00:26:39.680 ], 00:26:39.680 "driver_specific": {} 00:26:39.680 } 00:26:39.680 ] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:39.680 07:47:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.680 [2024-10-07 07:47:39.181839] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:39.680 [2024-10-07 07:47:39.181894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:39.680 [2024-10-07 07:47:39.181920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:39.680 [2024-10-07 07:47:39.184099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:39.680 07:47:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:39.680 "name": "Existed_Raid", 00:26:39.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.680 "strip_size_kb": 64, 00:26:39.680 "state": "configuring", 00:26:39.680 "raid_level": "raid5f", 00:26:39.680 "superblock": false, 00:26:39.680 "num_base_bdevs": 3, 00:26:39.680 "num_base_bdevs_discovered": 2, 00:26:39.680 "num_base_bdevs_operational": 3, 00:26:39.680 "base_bdevs_list": [ 00:26:39.680 { 00:26:39.680 "name": "BaseBdev1", 00:26:39.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.680 "is_configured": false, 00:26:39.680 "data_offset": 0, 00:26:39.680 "data_size": 0 00:26:39.680 }, 00:26:39.680 { 00:26:39.680 "name": "BaseBdev2", 00:26:39.680 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:39.680 "is_configured": true, 00:26:39.680 "data_offset": 0, 00:26:39.680 "data_size": 65536 00:26:39.680 }, 00:26:39.680 { 00:26:39.680 "name": "BaseBdev3", 00:26:39.680 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:39.680 "is_configured": true, 
00:26:39.680 "data_offset": 0, 00:26:39.680 "data_size": 65536 00:26:39.680 } 00:26:39.680 ] 00:26:39.680 }' 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:39.680 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.248 [2024-10-07 07:47:39.645949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:40.248 07:47:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:40.248 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.249 07:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:40.249 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:40.249 "name": "Existed_Raid", 00:26:40.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.249 "strip_size_kb": 64, 00:26:40.249 "state": "configuring", 00:26:40.249 "raid_level": "raid5f", 00:26:40.249 "superblock": false, 00:26:40.249 "num_base_bdevs": 3, 00:26:40.249 "num_base_bdevs_discovered": 1, 00:26:40.249 "num_base_bdevs_operational": 3, 00:26:40.249 "base_bdevs_list": [ 00:26:40.249 { 00:26:40.249 "name": "BaseBdev1", 00:26:40.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.249 "is_configured": false, 00:26:40.249 "data_offset": 0, 00:26:40.249 "data_size": 0 00:26:40.249 }, 00:26:40.249 { 00:26:40.249 "name": null, 00:26:40.249 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:40.249 "is_configured": false, 00:26:40.249 "data_offset": 0, 00:26:40.249 "data_size": 65536 00:26:40.249 }, 00:26:40.249 { 00:26:40.249 "name": "BaseBdev3", 00:26:40.249 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:40.249 "is_configured": true, 00:26:40.249 "data_offset": 0, 00:26:40.249 "data_size": 65536 00:26:40.249 } 00:26:40.249 ] 00:26:40.249 }' 00:26:40.249 07:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:40.249 07:47:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.816 [2024-10-07 07:47:40.212533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:40.816 BaseBdev1 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:40.816 07:47:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.816 [ 00:26:40.816 { 00:26:40.816 "name": "BaseBdev1", 00:26:40.816 "aliases": [ 00:26:40.816 "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a" 00:26:40.816 ], 00:26:40.816 "product_name": "Malloc disk", 00:26:40.816 "block_size": 512, 00:26:40.816 "num_blocks": 65536, 00:26:40.816 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:40.816 "assigned_rate_limits": { 00:26:40.816 "rw_ios_per_sec": 0, 00:26:40.816 "rw_mbytes_per_sec": 0, 00:26:40.816 "r_mbytes_per_sec": 0, 00:26:40.816 "w_mbytes_per_sec": 0 00:26:40.816 }, 00:26:40.816 "claimed": true, 00:26:40.816 "claim_type": "exclusive_write", 00:26:40.816 "zoned": false, 00:26:40.816 "supported_io_types": { 00:26:40.816 "read": true, 00:26:40.816 "write": true, 00:26:40.816 "unmap": true, 00:26:40.816 "flush": true, 00:26:40.816 "reset": true, 00:26:40.816 "nvme_admin": false, 00:26:40.816 "nvme_io": false, 00:26:40.816 "nvme_io_md": false, 00:26:40.816 "write_zeroes": true, 00:26:40.816 "zcopy": true, 00:26:40.816 "get_zone_info": false, 00:26:40.816 "zone_management": false, 00:26:40.816 "zone_append": false, 00:26:40.816 
"compare": false, 00:26:40.816 "compare_and_write": false, 00:26:40.816 "abort": true, 00:26:40.816 "seek_hole": false, 00:26:40.816 "seek_data": false, 00:26:40.816 "copy": true, 00:26:40.816 "nvme_iov_md": false 00:26:40.816 }, 00:26:40.816 "memory_domains": [ 00:26:40.816 { 00:26:40.816 "dma_device_id": "system", 00:26:40.816 "dma_device_type": 1 00:26:40.816 }, 00:26:40.816 { 00:26:40.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.816 "dma_device_type": 2 00:26:40.816 } 00:26:40.816 ], 00:26:40.816 "driver_specific": {} 00:26:40.816 } 00:26:40.816 ] 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:40.816 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:40.817 07:47:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:40.817 "name": "Existed_Raid", 00:26:40.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.817 "strip_size_kb": 64, 00:26:40.817 "state": "configuring", 00:26:40.817 "raid_level": "raid5f", 00:26:40.817 "superblock": false, 00:26:40.817 "num_base_bdevs": 3, 00:26:40.817 "num_base_bdevs_discovered": 2, 00:26:40.817 "num_base_bdevs_operational": 3, 00:26:40.817 "base_bdevs_list": [ 00:26:40.817 { 00:26:40.817 "name": "BaseBdev1", 00:26:40.817 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:40.817 "is_configured": true, 00:26:40.817 "data_offset": 0, 00:26:40.817 "data_size": 65536 00:26:40.817 }, 00:26:40.817 { 00:26:40.817 "name": null, 00:26:40.817 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:40.817 "is_configured": false, 00:26:40.817 "data_offset": 0, 00:26:40.817 "data_size": 65536 00:26:40.817 }, 00:26:40.817 { 00:26:40.817 "name": "BaseBdev3", 00:26:40.817 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:40.817 "is_configured": true, 00:26:40.817 "data_offset": 0, 00:26:40.817 "data_size": 65536 00:26:40.817 } 00:26:40.817 ] 00:26:40.817 }' 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:40.817 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.384 07:47:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.384 [2024-10-07 07:47:40.740773] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:41.384 07:47:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:41.384 "name": "Existed_Raid", 00:26:41.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.384 "strip_size_kb": 64, 00:26:41.384 "state": "configuring", 00:26:41.384 "raid_level": "raid5f", 00:26:41.384 "superblock": false, 00:26:41.384 "num_base_bdevs": 3, 00:26:41.384 "num_base_bdevs_discovered": 1, 00:26:41.384 "num_base_bdevs_operational": 3, 00:26:41.384 "base_bdevs_list": [ 00:26:41.384 { 00:26:41.384 "name": "BaseBdev1", 00:26:41.384 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:41.384 "is_configured": true, 00:26:41.384 "data_offset": 0, 00:26:41.384 "data_size": 65536 00:26:41.384 }, 00:26:41.384 { 00:26:41.384 "name": null, 00:26:41.384 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:41.384 "is_configured": false, 00:26:41.384 "data_offset": 0, 00:26:41.384 "data_size": 65536 00:26:41.384 }, 00:26:41.384 { 00:26:41.384 "name": null, 
00:26:41.384 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:41.384 "is_configured": false, 00:26:41.384 "data_offset": 0, 00:26:41.384 "data_size": 65536 00:26:41.384 } 00:26:41.384 ] 00:26:41.384 }' 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:41.384 07:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.642 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.642 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:41.642 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.642 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.901 [2024-10-07 07:47:41.244915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:41.901 07:47:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:41.901 "name": "Existed_Raid", 00:26:41.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.901 "strip_size_kb": 64, 00:26:41.901 "state": "configuring", 00:26:41.901 "raid_level": "raid5f", 00:26:41.901 "superblock": false, 00:26:41.901 "num_base_bdevs": 3, 00:26:41.901 "num_base_bdevs_discovered": 2, 00:26:41.901 "num_base_bdevs_operational": 3, 00:26:41.901 "base_bdevs_list": [ 00:26:41.901 { 
00:26:41.901 "name": "BaseBdev1", 00:26:41.901 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:41.901 "is_configured": true, 00:26:41.901 "data_offset": 0, 00:26:41.901 "data_size": 65536 00:26:41.901 }, 00:26:41.901 { 00:26:41.901 "name": null, 00:26:41.901 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:41.901 "is_configured": false, 00:26:41.901 "data_offset": 0, 00:26:41.901 "data_size": 65536 00:26:41.901 }, 00:26:41.901 { 00:26:41.901 "name": "BaseBdev3", 00:26:41.901 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:41.901 "is_configured": true, 00:26:41.901 "data_offset": 0, 00:26:41.901 "data_size": 65536 00:26:41.901 } 00:26:41.901 ] 00:26:41.901 }' 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:41.901 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.160 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.160 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:42.160 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.160 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:42.160 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.419 [2024-10-07 07:47:41.749075] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:42.419 "name": "Existed_Raid", 00:26:42.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.419 "strip_size_kb": 64, 00:26:42.419 "state": "configuring", 00:26:42.419 "raid_level": "raid5f", 00:26:42.419 "superblock": false, 00:26:42.419 "num_base_bdevs": 3, 00:26:42.419 "num_base_bdevs_discovered": 1, 00:26:42.419 "num_base_bdevs_operational": 3, 00:26:42.419 "base_bdevs_list": [ 00:26:42.419 { 00:26:42.419 "name": null, 00:26:42.419 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:42.419 "is_configured": false, 00:26:42.419 "data_offset": 0, 00:26:42.419 "data_size": 65536 00:26:42.419 }, 00:26:42.419 { 00:26:42.419 "name": null, 00:26:42.419 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:42.419 "is_configured": false, 00:26:42.419 "data_offset": 0, 00:26:42.419 "data_size": 65536 00:26:42.419 }, 00:26:42.419 { 00:26:42.419 "name": "BaseBdev3", 00:26:42.419 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:42.419 "is_configured": true, 00:26:42.419 "data_offset": 0, 00:26:42.419 "data_size": 65536 00:26:42.419 } 00:26:42.419 ] 00:26:42.419 }' 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:42.419 07:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.989 [2024-10-07 07:47:42.344617] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.989 07:47:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:42.989 "name": "Existed_Raid", 00:26:42.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.989 "strip_size_kb": 64, 00:26:42.989 "state": "configuring", 00:26:42.989 "raid_level": "raid5f", 00:26:42.989 "superblock": false, 00:26:42.989 "num_base_bdevs": 3, 00:26:42.989 "num_base_bdevs_discovered": 2, 00:26:42.989 "num_base_bdevs_operational": 3, 00:26:42.989 "base_bdevs_list": [ 00:26:42.989 { 00:26:42.989 "name": null, 00:26:42.989 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:42.989 "is_configured": false, 00:26:42.989 "data_offset": 0, 00:26:42.989 "data_size": 65536 00:26:42.989 }, 00:26:42.989 { 00:26:42.989 "name": "BaseBdev2", 00:26:42.989 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:42.989 "is_configured": true, 00:26:42.989 "data_offset": 0, 00:26:42.989 "data_size": 65536 00:26:42.989 }, 00:26:42.989 { 00:26:42.989 "name": "BaseBdev3", 00:26:42.989 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:42.989 "is_configured": true, 00:26:42.989 "data_offset": 0, 00:26:42.989 "data_size": 65536 00:26:42.989 } 00:26:42.989 ] 00:26:42.989 }' 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:42.989 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.557 07:47:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3a12aeb3-0c36-4a1e-af9b-d72c3e42479a 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 [2024-10-07 07:47:42.954473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:43.557 [2024-10-07 07:47:42.954752] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:43.557 [2024-10-07 07:47:42.954813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:43.557 [2024-10-07 07:47:42.955193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:26:43.557 NewBaseBdev 00:26:43.557 [2024-10-07 07:47:42.961008] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:43.557 [2024-10-07 07:47:42.961031] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:43.557 [2024-10-07 07:47:42.961346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:43.557 07:47:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 [ 00:26:43.557 { 00:26:43.557 "name": "NewBaseBdev", 00:26:43.557 "aliases": [ 00:26:43.557 "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a" 00:26:43.557 ], 00:26:43.557 "product_name": "Malloc disk", 00:26:43.557 "block_size": 512, 00:26:43.557 "num_blocks": 65536, 00:26:43.557 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:43.557 "assigned_rate_limits": { 00:26:43.557 "rw_ios_per_sec": 0, 00:26:43.557 "rw_mbytes_per_sec": 0, 00:26:43.557 "r_mbytes_per_sec": 0, 00:26:43.557 "w_mbytes_per_sec": 0 00:26:43.557 }, 00:26:43.557 "claimed": true, 00:26:43.557 "claim_type": "exclusive_write", 00:26:43.557 "zoned": false, 00:26:43.557 "supported_io_types": { 00:26:43.557 "read": true, 00:26:43.557 "write": true, 00:26:43.557 "unmap": true, 00:26:43.557 "flush": true, 00:26:43.557 "reset": true, 00:26:43.557 "nvme_admin": false, 00:26:43.557 "nvme_io": false, 00:26:43.557 "nvme_io_md": false, 00:26:43.557 "write_zeroes": true, 00:26:43.557 "zcopy": true, 00:26:43.557 "get_zone_info": false, 00:26:43.557 "zone_management": false, 00:26:43.557 "zone_append": false, 00:26:43.557 "compare": false, 00:26:43.557 "compare_and_write": false, 00:26:43.557 "abort": true, 00:26:43.557 "seek_hole": false, 00:26:43.557 "seek_data": false, 00:26:43.557 "copy": true, 00:26:43.557 "nvme_iov_md": false 00:26:43.557 }, 00:26:43.557 "memory_domains": [ 00:26:43.557 { 00:26:43.557 "dma_device_id": "system", 00:26:43.557 "dma_device_type": 1 00:26:43.557 }, 00:26:43.557 { 00:26:43.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.557 "dma_device_type": 2 00:26:43.557 } 00:26:43.557 ], 00:26:43.557 "driver_specific": {} 00:26:43.557 } 00:26:43.557 ] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:26:43.557 07:47:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:43.557 07:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:43.557 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:43.557 "name": "Existed_Raid", 00:26:43.557 "uuid": "1796db3f-3c94-4a9f-bc69-84d8c80958a5", 00:26:43.557 "strip_size_kb": 64, 00:26:43.557 "state": "online", 
00:26:43.557 "raid_level": "raid5f", 00:26:43.557 "superblock": false, 00:26:43.557 "num_base_bdevs": 3, 00:26:43.557 "num_base_bdevs_discovered": 3, 00:26:43.557 "num_base_bdevs_operational": 3, 00:26:43.557 "base_bdevs_list": [ 00:26:43.557 { 00:26:43.557 "name": "NewBaseBdev", 00:26:43.557 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:43.557 "is_configured": true, 00:26:43.557 "data_offset": 0, 00:26:43.557 "data_size": 65536 00:26:43.557 }, 00:26:43.557 { 00:26:43.557 "name": "BaseBdev2", 00:26:43.557 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:43.557 "is_configured": true, 00:26:43.557 "data_offset": 0, 00:26:43.557 "data_size": 65536 00:26:43.557 }, 00:26:43.557 { 00:26:43.557 "name": "BaseBdev3", 00:26:43.557 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:43.557 "is_configured": true, 00:26:43.557 "data_offset": 0, 00:26:43.557 "data_size": 65536 00:26:43.557 } 00:26:43.557 ] 00:26:43.557 }' 00:26:43.557 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:43.557 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:44.125 07:47:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.125 [2024-10-07 07:47:43.440167] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:44.125 "name": "Existed_Raid", 00:26:44.125 "aliases": [ 00:26:44.125 "1796db3f-3c94-4a9f-bc69-84d8c80958a5" 00:26:44.125 ], 00:26:44.125 "product_name": "Raid Volume", 00:26:44.125 "block_size": 512, 00:26:44.125 "num_blocks": 131072, 00:26:44.125 "uuid": "1796db3f-3c94-4a9f-bc69-84d8c80958a5", 00:26:44.125 "assigned_rate_limits": { 00:26:44.125 "rw_ios_per_sec": 0, 00:26:44.125 "rw_mbytes_per_sec": 0, 00:26:44.125 "r_mbytes_per_sec": 0, 00:26:44.125 "w_mbytes_per_sec": 0 00:26:44.125 }, 00:26:44.125 "claimed": false, 00:26:44.125 "zoned": false, 00:26:44.125 "supported_io_types": { 00:26:44.125 "read": true, 00:26:44.125 "write": true, 00:26:44.125 "unmap": false, 00:26:44.125 "flush": false, 00:26:44.125 "reset": true, 00:26:44.125 "nvme_admin": false, 00:26:44.125 "nvme_io": false, 00:26:44.125 "nvme_io_md": false, 00:26:44.125 "write_zeroes": true, 00:26:44.125 "zcopy": false, 00:26:44.125 "get_zone_info": false, 00:26:44.125 "zone_management": false, 00:26:44.125 "zone_append": false, 00:26:44.125 "compare": false, 00:26:44.125 "compare_and_write": false, 00:26:44.125 "abort": false, 00:26:44.125 "seek_hole": false, 00:26:44.125 "seek_data": false, 00:26:44.125 "copy": false, 00:26:44.125 "nvme_iov_md": false 00:26:44.125 }, 00:26:44.125 "driver_specific": { 00:26:44.125 "raid": { 00:26:44.125 "uuid": 
"1796db3f-3c94-4a9f-bc69-84d8c80958a5", 00:26:44.125 "strip_size_kb": 64, 00:26:44.125 "state": "online", 00:26:44.125 "raid_level": "raid5f", 00:26:44.125 "superblock": false, 00:26:44.125 "num_base_bdevs": 3, 00:26:44.125 "num_base_bdevs_discovered": 3, 00:26:44.125 "num_base_bdevs_operational": 3, 00:26:44.125 "base_bdevs_list": [ 00:26:44.125 { 00:26:44.125 "name": "NewBaseBdev", 00:26:44.125 "uuid": "3a12aeb3-0c36-4a1e-af9b-d72c3e42479a", 00:26:44.125 "is_configured": true, 00:26:44.125 "data_offset": 0, 00:26:44.125 "data_size": 65536 00:26:44.125 }, 00:26:44.125 { 00:26:44.125 "name": "BaseBdev2", 00:26:44.125 "uuid": "5a5c04ec-0de7-4953-9887-0c4a94431682", 00:26:44.125 "is_configured": true, 00:26:44.125 "data_offset": 0, 00:26:44.125 "data_size": 65536 00:26:44.125 }, 00:26:44.125 { 00:26:44.125 "name": "BaseBdev3", 00:26:44.125 "uuid": "4d78b8b9-df09-4986-b1aa-e36db77deded", 00:26:44.125 "is_configured": true, 00:26:44.125 "data_offset": 0, 00:26:44.125 "data_size": 65536 00:26:44.125 } 00:26:44.125 ] 00:26:44.125 } 00:26:44.125 } 00:26:44.125 }' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:44.125 BaseBdev2 00:26:44.125 BaseBdev3' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.125 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.384 [2024-10-07 07:47:43.711988] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:44.384 [2024-10-07 07:47:43.712142] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:44.384 [2024-10-07 07:47:43.712373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:44.384 [2024-10-07 07:47:43.712783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:44.384 [2024-10-07 07:47:43.712929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80121 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 80121 ']' 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@957 -- # kill -0 80121 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # uname 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 80121 00:26:44.384 killing process with pid 80121 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 80121' 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # kill 80121 00:26:44.384 [2024-10-07 07:47:43.758426] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:44.384 07:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@977 -- # wait 80121 00:26:44.642 [2024-10-07 07:47:44.079421] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:46.078 00:26:46.078 real 0m11.023s 00:26:46.078 user 0m17.479s 00:26:46.078 sys 0m1.956s 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.078 ************************************ 00:26:46.078 END TEST raid5f_state_function_test 00:26:46.078 ************************************ 00:26:46.078 07:47:45 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:26:46.078 07:47:45 bdev_raid -- 
common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:26:46.078 07:47:45 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:26:46.078 07:47:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:46.078 ************************************ 00:26:46.078 START TEST raid5f_state_function_test_sb 00:26:46.078 ************************************ 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid5f 3 true 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:46.078 07:47:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:46.078 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:46.079 Process raid pid: 80743 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80743 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80743' 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80743 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 80743 ']' 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:26:46.079 07:47:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.079 [2024-10-07 07:47:45.571993] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:26:46.079 [2024-10-07 07:47:45.572317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.337 [2024-10-07 07:47:45.734681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.596 [2024-10-07 07:47:45.958521] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.855 [2024-10-07 07:47:46.176384] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:46.855 [2024-10-07 07:47:46.176632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:47.114 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.115 [2024-10-07 07:47:46.581531] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:47.115 [2024-10-07 07:47:46.581741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:47.115 [2024-10-07 07:47:46.581841] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:47.115 [2024-10-07 07:47:46.581894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:47.115 [2024-10-07 07:47:46.582056] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:26:47.115 [2024-10-07 07:47:46.582102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.115 07:47:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.115 "name": "Existed_Raid", 00:26:47.115 "uuid": "c87c1a65-a42b-44c0-814d-417aed3a7f95", 00:26:47.115 "strip_size_kb": 64, 00:26:47.115 "state": "configuring", 00:26:47.115 "raid_level": "raid5f", 00:26:47.115 "superblock": true, 00:26:47.115 "num_base_bdevs": 3, 00:26:47.115 "num_base_bdevs_discovered": 0, 00:26:47.115 "num_base_bdevs_operational": 3, 00:26:47.115 "base_bdevs_list": [ 00:26:47.115 { 00:26:47.115 "name": "BaseBdev1", 00:26:47.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.115 "is_configured": false, 00:26:47.115 "data_offset": 0, 00:26:47.115 "data_size": 0 00:26:47.115 }, 00:26:47.115 { 00:26:47.115 "name": "BaseBdev2", 00:26:47.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.115 "is_configured": false, 00:26:47.115 "data_offset": 0, 00:26:47.115 "data_size": 0 00:26:47.115 }, 00:26:47.115 { 00:26:47.115 "name": "BaseBdev3", 00:26:47.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.115 "is_configured": false, 00:26:47.115 "data_offset": 0, 00:26:47.115 "data_size": 0 00:26:47.115 } 00:26:47.115 ] 00:26:47.115 }' 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.115 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.684 07:47:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:47.684 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.684 07:47:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.684 [2024-10-07 07:47:46.997509] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:47.684 
[2024-10-07 07:47:46.997682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.684 [2024-10-07 07:47:47.009557] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:47.684 [2024-10-07 07:47:47.009730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:47.684 [2024-10-07 07:47:47.009842] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:47.684 [2024-10-07 07:47:47.009893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:47.684 [2024-10-07 07:47:47.009983] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:47.684 [2024-10-07 07:47:47.010031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.684 [2024-10-07 07:47:47.073819] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:47.684 BaseBdev1 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.684 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.684 [ 00:26:47.684 { 00:26:47.684 "name": "BaseBdev1", 00:26:47.684 "aliases": [ 00:26:47.684 "62fb35d3-a477-47c7-a6be-804515367361" 00:26:47.684 ], 00:26:47.684 "product_name": "Malloc disk", 00:26:47.684 "block_size": 512, 00:26:47.684 
"num_blocks": 65536, 00:26:47.684 "uuid": "62fb35d3-a477-47c7-a6be-804515367361", 00:26:47.684 "assigned_rate_limits": { 00:26:47.684 "rw_ios_per_sec": 0, 00:26:47.684 "rw_mbytes_per_sec": 0, 00:26:47.684 "r_mbytes_per_sec": 0, 00:26:47.684 "w_mbytes_per_sec": 0 00:26:47.684 }, 00:26:47.684 "claimed": true, 00:26:47.684 "claim_type": "exclusive_write", 00:26:47.684 "zoned": false, 00:26:47.684 "supported_io_types": { 00:26:47.684 "read": true, 00:26:47.684 "write": true, 00:26:47.684 "unmap": true, 00:26:47.684 "flush": true, 00:26:47.684 "reset": true, 00:26:47.684 "nvme_admin": false, 00:26:47.684 "nvme_io": false, 00:26:47.684 "nvme_io_md": false, 00:26:47.684 "write_zeroes": true, 00:26:47.684 "zcopy": true, 00:26:47.684 "get_zone_info": false, 00:26:47.684 "zone_management": false, 00:26:47.684 "zone_append": false, 00:26:47.684 "compare": false, 00:26:47.684 "compare_and_write": false, 00:26:47.684 "abort": true, 00:26:47.684 "seek_hole": false, 00:26:47.684 "seek_data": false, 00:26:47.684 "copy": true, 00:26:47.684 "nvme_iov_md": false 00:26:47.684 }, 00:26:47.684 "memory_domains": [ 00:26:47.684 { 00:26:47.684 "dma_device_id": "system", 00:26:47.684 "dma_device_type": 1 00:26:47.684 }, 00:26:47.684 { 00:26:47.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.684 "dma_device_type": 2 00:26:47.684 } 00:26:47.684 ], 00:26:47.685 "driver_specific": {} 00:26:47.685 } 00:26:47.685 ] 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.685 "name": "Existed_Raid", 00:26:47.685 "uuid": "d6508f68-b5bf-4e3c-b8a9-a222ebde7383", 00:26:47.685 "strip_size_kb": 64, 00:26:47.685 "state": "configuring", 00:26:47.685 "raid_level": "raid5f", 00:26:47.685 "superblock": true, 00:26:47.685 "num_base_bdevs": 3, 00:26:47.685 "num_base_bdevs_discovered": 1, 00:26:47.685 "num_base_bdevs_operational": 3, 00:26:47.685 "base_bdevs_list": [ 00:26:47.685 { 00:26:47.685 
"name": "BaseBdev1", 00:26:47.685 "uuid": "62fb35d3-a477-47c7-a6be-804515367361", 00:26:47.685 "is_configured": true, 00:26:47.685 "data_offset": 2048, 00:26:47.685 "data_size": 63488 00:26:47.685 }, 00:26:47.685 { 00:26:47.685 "name": "BaseBdev2", 00:26:47.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.685 "is_configured": false, 00:26:47.685 "data_offset": 0, 00:26:47.685 "data_size": 0 00:26:47.685 }, 00:26:47.685 { 00:26:47.685 "name": "BaseBdev3", 00:26:47.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.685 "is_configured": false, 00:26:47.685 "data_offset": 0, 00:26:47.685 "data_size": 0 00:26:47.685 } 00:26:47.685 ] 00:26:47.685 }' 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.685 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.253 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:48.253 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:48.253 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.253 [2024-10-07 07:47:47.533968] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:48.253 [2024-10-07 07:47:47.534158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:48.253 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:48.253 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:48.253 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:48.253 07:47:47 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:26:48.253 [2024-10-07 07:47:47.542017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:48.253 [2024-10-07 07:47:47.544480] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:48.253 [2024-10-07 07:47:47.544656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:48.254 [2024-10-07 07:47:47.544765] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:48.254 [2024-10-07 07:47:47.544870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:48.254 "name": "Existed_Raid", 00:26:48.254 "uuid": "80018a96-df89-4480-9e47-4ea07365e315", 00:26:48.254 "strip_size_kb": 64, 00:26:48.254 "state": "configuring", 00:26:48.254 "raid_level": "raid5f", 00:26:48.254 "superblock": true, 00:26:48.254 "num_base_bdevs": 3, 00:26:48.254 "num_base_bdevs_discovered": 1, 00:26:48.254 "num_base_bdevs_operational": 3, 00:26:48.254 "base_bdevs_list": [ 00:26:48.254 { 00:26:48.254 "name": "BaseBdev1", 00:26:48.254 "uuid": "62fb35d3-a477-47c7-a6be-804515367361", 00:26:48.254 "is_configured": true, 00:26:48.254 "data_offset": 2048, 00:26:48.254 "data_size": 63488 00:26:48.254 }, 00:26:48.254 { 00:26:48.254 "name": "BaseBdev2", 00:26:48.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.254 "is_configured": false, 00:26:48.254 "data_offset": 0, 00:26:48.254 "data_size": 0 00:26:48.254 }, 00:26:48.254 { 00:26:48.254 "name": "BaseBdev3", 00:26:48.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.254 "is_configured": false, 00:26:48.254 "data_offset": 0, 00:26:48.254 "data_size": 
0 00:26:48.254 } 00:26:48.254 ] 00:26:48.254 }' 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:48.254 07:47:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.513 BaseBdev2 00:26:48.513 [2024-10-07 07:47:48.042934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:48.513 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.513 [ 00:26:48.513 { 00:26:48.513 "name": "BaseBdev2", 00:26:48.513 "aliases": [ 00:26:48.513 "021e7b15-0dec-4846-9947-91fc1d75b147" 00:26:48.513 ], 00:26:48.513 "product_name": "Malloc disk", 00:26:48.513 "block_size": 512, 00:26:48.513 "num_blocks": 65536, 00:26:48.513 "uuid": "021e7b15-0dec-4846-9947-91fc1d75b147", 00:26:48.513 "assigned_rate_limits": { 00:26:48.513 "rw_ios_per_sec": 0, 00:26:48.513 "rw_mbytes_per_sec": 0, 00:26:48.513 "r_mbytes_per_sec": 0, 00:26:48.513 "w_mbytes_per_sec": 0 00:26:48.513 }, 00:26:48.513 "claimed": true, 00:26:48.513 "claim_type": "exclusive_write", 00:26:48.513 "zoned": false, 00:26:48.513 "supported_io_types": { 00:26:48.513 "read": true, 00:26:48.513 "write": true, 00:26:48.513 "unmap": true, 00:26:48.513 "flush": true, 00:26:48.513 "reset": true, 00:26:48.513 "nvme_admin": false, 00:26:48.513 "nvme_io": false, 00:26:48.513 "nvme_io_md": false, 00:26:48.514 "write_zeroes": true, 00:26:48.514 "zcopy": true, 00:26:48.514 "get_zone_info": false, 00:26:48.514 "zone_management": false, 00:26:48.514 "zone_append": false, 00:26:48.514 "compare": false, 00:26:48.514 "compare_and_write": false, 00:26:48.514 "abort": true, 00:26:48.514 "seek_hole": false, 00:26:48.514 "seek_data": false, 00:26:48.773 "copy": true, 00:26:48.773 "nvme_iov_md": false 00:26:48.773 }, 00:26:48.773 "memory_domains": [ 00:26:48.773 { 00:26:48.773 "dma_device_id": "system", 00:26:48.773 "dma_device_type": 1 00:26:48.773 }, 00:26:48.773 { 00:26:48.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.773 "dma_device_type": 2 00:26:48.773 } 
00:26:48.773 ], 00:26:48.773 "driver_specific": {} 00:26:48.773 } 00:26:48.773 ] 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:48.773 07:47:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:48.773 "name": "Existed_Raid", 00:26:48.773 "uuid": "80018a96-df89-4480-9e47-4ea07365e315", 00:26:48.773 "strip_size_kb": 64, 00:26:48.773 "state": "configuring", 00:26:48.773 "raid_level": "raid5f", 00:26:48.773 "superblock": true, 00:26:48.773 "num_base_bdevs": 3, 00:26:48.773 "num_base_bdevs_discovered": 2, 00:26:48.773 "num_base_bdevs_operational": 3, 00:26:48.773 "base_bdevs_list": [ 00:26:48.773 { 00:26:48.773 "name": "BaseBdev1", 00:26:48.773 "uuid": "62fb35d3-a477-47c7-a6be-804515367361", 00:26:48.773 "is_configured": true, 00:26:48.773 "data_offset": 2048, 00:26:48.773 "data_size": 63488 00:26:48.773 }, 00:26:48.773 { 00:26:48.773 "name": "BaseBdev2", 00:26:48.773 "uuid": "021e7b15-0dec-4846-9947-91fc1d75b147", 00:26:48.773 "is_configured": true, 00:26:48.773 "data_offset": 2048, 00:26:48.773 "data_size": 63488 00:26:48.773 }, 00:26:48.773 { 00:26:48.773 "name": "BaseBdev3", 00:26:48.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.773 "is_configured": false, 00:26:48.773 "data_offset": 0, 00:26:48.773 "data_size": 0 00:26:48.773 } 00:26:48.773 ] 00:26:48.773 }' 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:48.773 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 
-- # xtrace_disable 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.031 BaseBdev3 00:26:49.031 [2024-10-07 07:47:48.551810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:49.031 [2024-10-07 07:47:48.552083] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:49.031 [2024-10-07 07:47:48.552113] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:49.031 [2024-10-07 07:47:48.552382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.031 [2024-10-07 07:47:48.559332] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:49.031 [2024-10-07 07:47:48.559496] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:49.031 [2024-10-07 07:47:48.559824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.031 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.031 [ 00:26:49.031 { 00:26:49.031 "name": "BaseBdev3", 00:26:49.031 "aliases": [ 00:26:49.031 "36effdb2-ee9b-47bf-959c-1304b10f69b4" 00:26:49.031 ], 00:26:49.031 "product_name": "Malloc disk", 00:26:49.031 "block_size": 512, 00:26:49.031 "num_blocks": 65536, 00:26:49.031 "uuid": "36effdb2-ee9b-47bf-959c-1304b10f69b4", 00:26:49.031 "assigned_rate_limits": { 00:26:49.031 "rw_ios_per_sec": 0, 00:26:49.031 "rw_mbytes_per_sec": 0, 00:26:49.031 "r_mbytes_per_sec": 0, 00:26:49.031 "w_mbytes_per_sec": 0 00:26:49.031 }, 00:26:49.031 "claimed": true, 00:26:49.031 "claim_type": "exclusive_write", 00:26:49.031 "zoned": false, 00:26:49.031 "supported_io_types": { 00:26:49.031 "read": true, 00:26:49.031 "write": true, 00:26:49.031 "unmap": true, 00:26:49.031 "flush": true, 00:26:49.031 "reset": true, 00:26:49.031 "nvme_admin": false, 00:26:49.031 "nvme_io": false, 00:26:49.031 "nvme_io_md": false, 00:26:49.031 "write_zeroes": true, 00:26:49.031 "zcopy": true, 00:26:49.031 "get_zone_info": false, 00:26:49.031 "zone_management": false, 00:26:49.031 "zone_append": false, 00:26:49.031 "compare": false, 00:26:49.031 "compare_and_write": false, 00:26:49.031 "abort": true, 00:26:49.031 "seek_hole": false, 00:26:49.031 "seek_data": false, 00:26:49.031 "copy": true, 00:26:49.031 
"nvme_iov_md": false 00:26:49.031 }, 00:26:49.031 "memory_domains": [ 00:26:49.031 { 00:26:49.031 "dma_device_id": "system", 00:26:49.031 "dma_device_type": 1 00:26:49.031 }, 00:26:49.031 { 00:26:49.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.031 "dma_device_type": 2 00:26:49.031 } 00:26:49.031 ], 00:26:49.289 "driver_specific": {} 00:26:49.289 } 00:26:49.289 ] 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.289 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.289 "name": "Existed_Raid", 00:26:49.289 "uuid": "80018a96-df89-4480-9e47-4ea07365e315", 00:26:49.289 "strip_size_kb": 64, 00:26:49.289 "state": "online", 00:26:49.289 "raid_level": "raid5f", 00:26:49.289 "superblock": true, 00:26:49.289 "num_base_bdevs": 3, 00:26:49.289 "num_base_bdevs_discovered": 3, 00:26:49.289 "num_base_bdevs_operational": 3, 00:26:49.289 "base_bdevs_list": [ 00:26:49.289 { 00:26:49.289 "name": "BaseBdev1", 00:26:49.289 "uuid": "62fb35d3-a477-47c7-a6be-804515367361", 00:26:49.289 "is_configured": true, 00:26:49.289 "data_offset": 2048, 00:26:49.289 "data_size": 63488 00:26:49.289 }, 00:26:49.289 { 00:26:49.289 "name": "BaseBdev2", 00:26:49.289 "uuid": "021e7b15-0dec-4846-9947-91fc1d75b147", 00:26:49.289 "is_configured": true, 00:26:49.289 "data_offset": 2048, 00:26:49.289 "data_size": 63488 00:26:49.289 }, 00:26:49.290 { 00:26:49.290 "name": "BaseBdev3", 00:26:49.290 "uuid": "36effdb2-ee9b-47bf-959c-1304b10f69b4", 00:26:49.290 "is_configured": true, 00:26:49.290 "data_offset": 2048, 00:26:49.290 "data_size": 63488 00:26:49.290 } 00:26:49.290 ] 00:26:49.290 }' 00:26:49.290 07:47:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.290 07:47:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:49.549 [2024-10-07 07:47:49.047184] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.549 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.549 "name": "Existed_Raid", 00:26:49.549 "aliases": [ 00:26:49.549 "80018a96-df89-4480-9e47-4ea07365e315" 00:26:49.549 ], 00:26:49.549 "product_name": "Raid Volume", 00:26:49.549 "block_size": 512, 00:26:49.549 "num_blocks": 126976, 00:26:49.549 "uuid": "80018a96-df89-4480-9e47-4ea07365e315", 00:26:49.549 "assigned_rate_limits": { 00:26:49.549 "rw_ios_per_sec": 0, 00:26:49.549 
"rw_mbytes_per_sec": 0, 00:26:49.549 "r_mbytes_per_sec": 0, 00:26:49.549 "w_mbytes_per_sec": 0 00:26:49.549 }, 00:26:49.549 "claimed": false, 00:26:49.549 "zoned": false, 00:26:49.549 "supported_io_types": { 00:26:49.549 "read": true, 00:26:49.549 "write": true, 00:26:49.549 "unmap": false, 00:26:49.549 "flush": false, 00:26:49.549 "reset": true, 00:26:49.549 "nvme_admin": false, 00:26:49.549 "nvme_io": false, 00:26:49.549 "nvme_io_md": false, 00:26:49.549 "write_zeroes": true, 00:26:49.549 "zcopy": false, 00:26:49.549 "get_zone_info": false, 00:26:49.549 "zone_management": false, 00:26:49.549 "zone_append": false, 00:26:49.549 "compare": false, 00:26:49.549 "compare_and_write": false, 00:26:49.549 "abort": false, 00:26:49.549 "seek_hole": false, 00:26:49.549 "seek_data": false, 00:26:49.549 "copy": false, 00:26:49.549 "nvme_iov_md": false 00:26:49.549 }, 00:26:49.549 "driver_specific": { 00:26:49.549 "raid": { 00:26:49.549 "uuid": "80018a96-df89-4480-9e47-4ea07365e315", 00:26:49.549 "strip_size_kb": 64, 00:26:49.549 "state": "online", 00:26:49.549 "raid_level": "raid5f", 00:26:49.549 "superblock": true, 00:26:49.549 "num_base_bdevs": 3, 00:26:49.549 "num_base_bdevs_discovered": 3, 00:26:49.549 "num_base_bdevs_operational": 3, 00:26:49.549 "base_bdevs_list": [ 00:26:49.549 { 00:26:49.549 "name": "BaseBdev1", 00:26:49.549 "uuid": "62fb35d3-a477-47c7-a6be-804515367361", 00:26:49.550 "is_configured": true, 00:26:49.550 "data_offset": 2048, 00:26:49.550 "data_size": 63488 00:26:49.550 }, 00:26:49.550 { 00:26:49.550 "name": "BaseBdev2", 00:26:49.550 "uuid": "021e7b15-0dec-4846-9947-91fc1d75b147", 00:26:49.550 "is_configured": true, 00:26:49.550 "data_offset": 2048, 00:26:49.550 "data_size": 63488 00:26:49.550 }, 00:26:49.550 { 00:26:49.550 "name": "BaseBdev3", 00:26:49.550 "uuid": "36effdb2-ee9b-47bf-959c-1304b10f69b4", 00:26:49.550 "is_configured": true, 00:26:49.550 "data_offset": 2048, 00:26:49.550 "data_size": 63488 00:26:49.550 } 00:26:49.550 ] 00:26:49.550 } 
00:26:49.550 } 00:26:49.550 }' 00:26:49.550 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:49.809 BaseBdev2 00:26:49.809 BaseBdev3' 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:49.809 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:49.810 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.810 [2024-10-07 
07:47:49.335065] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.069 07:47:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.069 "name": "Existed_Raid", 00:26:50.069 "uuid": "80018a96-df89-4480-9e47-4ea07365e315", 00:26:50.069 "strip_size_kb": 64, 00:26:50.069 "state": "online", 00:26:50.069 "raid_level": "raid5f", 00:26:50.069 "superblock": true, 00:26:50.069 "num_base_bdevs": 3, 00:26:50.069 "num_base_bdevs_discovered": 2, 00:26:50.069 "num_base_bdevs_operational": 2, 00:26:50.069 "base_bdevs_list": [ 00:26:50.069 { 00:26:50.069 "name": null, 00:26:50.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.069 "is_configured": false, 00:26:50.069 "data_offset": 0, 00:26:50.069 "data_size": 63488 00:26:50.069 }, 00:26:50.069 { 00:26:50.069 "name": "BaseBdev2", 00:26:50.069 "uuid": "021e7b15-0dec-4846-9947-91fc1d75b147", 00:26:50.069 "is_configured": true, 00:26:50.069 "data_offset": 2048, 00:26:50.069 "data_size": 63488 00:26:50.069 }, 00:26:50.069 { 00:26:50.069 "name": "BaseBdev3", 00:26:50.069 "uuid": "36effdb2-ee9b-47bf-959c-1304b10f69b4", 00:26:50.069 "is_configured": true, 00:26:50.069 "data_offset": 2048, 00:26:50.069 "data_size": 63488 00:26:50.069 } 00:26:50.069 ] 00:26:50.069 }' 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.069 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.638 07:47:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.638 [2024-10-07 07:47:49.945324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:50.638 [2024-10-07 07:47:49.945637] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:50.638 [2024-10-07 07:47:50.049593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:50.638 07:47:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.638 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.638 [2024-10-07 07:47:50.105675] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:50.638 [2024-10-07 07:47:50.105883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.898 
07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.898 BaseBdev2 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:26:50.898 07:47:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.898 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.898 [ 00:26:50.898 { 00:26:50.898 "name": "BaseBdev2", 00:26:50.898 "aliases": [ 00:26:50.898 "5023c0b7-17aa-4c9e-bb30-511a89cd694f" 00:26:50.898 ], 00:26:50.898 "product_name": "Malloc disk", 00:26:50.898 "block_size": 512, 00:26:50.898 "num_blocks": 65536, 00:26:50.898 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:50.898 "assigned_rate_limits": { 00:26:50.898 "rw_ios_per_sec": 0, 00:26:50.898 "rw_mbytes_per_sec": 0, 00:26:50.898 "r_mbytes_per_sec": 0, 00:26:50.898 "w_mbytes_per_sec": 0 00:26:50.898 }, 00:26:50.898 "claimed": false, 00:26:50.898 "zoned": false, 00:26:50.898 "supported_io_types": { 00:26:50.898 "read": true, 00:26:50.898 "write": true, 00:26:50.898 "unmap": true, 00:26:50.898 "flush": true, 00:26:50.898 "reset": true, 00:26:50.898 "nvme_admin": false, 00:26:50.898 "nvme_io": false, 00:26:50.898 "nvme_io_md": false, 00:26:50.898 "write_zeroes": true, 00:26:50.898 "zcopy": true, 00:26:50.898 "get_zone_info": false, 
00:26:50.898 "zone_management": false, 00:26:50.898 "zone_append": false, 00:26:50.898 "compare": false, 00:26:50.898 "compare_and_write": false, 00:26:50.898 "abort": true, 00:26:50.898 "seek_hole": false, 00:26:50.898 "seek_data": false, 00:26:50.898 "copy": true, 00:26:50.898 "nvme_iov_md": false 00:26:50.898 }, 00:26:50.898 "memory_domains": [ 00:26:50.898 { 00:26:50.898 "dma_device_id": "system", 00:26:50.898 "dma_device_type": 1 00:26:50.898 }, 00:26:50.898 { 00:26:50.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.899 "dma_device_type": 2 00:26:50.899 } 00:26:50.899 ], 00:26:50.899 "driver_specific": {} 00:26:50.899 } 00:26:50.899 ] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.899 BaseBdev3 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:50.899 07:47:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.899 [ 00:26:50.899 { 00:26:50.899 "name": "BaseBdev3", 00:26:50.899 "aliases": [ 00:26:50.899 "6d9c8577-debf-4794-9802-b2093be414ea" 00:26:50.899 ], 00:26:50.899 "product_name": "Malloc disk", 00:26:50.899 "block_size": 512, 00:26:50.899 "num_blocks": 65536, 00:26:50.899 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:50.899 "assigned_rate_limits": { 00:26:50.899 "rw_ios_per_sec": 0, 00:26:50.899 "rw_mbytes_per_sec": 0, 00:26:50.899 "r_mbytes_per_sec": 0, 00:26:50.899 "w_mbytes_per_sec": 0 00:26:50.899 }, 00:26:50.899 "claimed": false, 00:26:50.899 "zoned": false, 00:26:50.899 "supported_io_types": { 00:26:50.899 "read": true, 00:26:50.899 "write": true, 00:26:50.899 "unmap": true, 00:26:50.899 "flush": true, 00:26:50.899 "reset": true, 00:26:50.899 "nvme_admin": false, 00:26:50.899 "nvme_io": false, 00:26:50.899 "nvme_io_md": 
false, 00:26:50.899 "write_zeroes": true, 00:26:50.899 "zcopy": true, 00:26:50.899 "get_zone_info": false, 00:26:50.899 "zone_management": false, 00:26:50.899 "zone_append": false, 00:26:50.899 "compare": false, 00:26:50.899 "compare_and_write": false, 00:26:50.899 "abort": true, 00:26:50.899 "seek_hole": false, 00:26:50.899 "seek_data": false, 00:26:50.899 "copy": true, 00:26:50.899 "nvme_iov_md": false 00:26:50.899 }, 00:26:50.899 "memory_domains": [ 00:26:50.899 { 00:26:50.899 "dma_device_id": "system", 00:26:50.899 "dma_device_type": 1 00:26:50.899 }, 00:26:50.899 { 00:26:50.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.899 "dma_device_type": 2 00:26:50.899 } 00:26:50.899 ], 00:26:50.899 "driver_specific": {} 00:26:50.899 } 00:26:50.899 ] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.899 [2024-10-07 07:47:50.430141] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:50.899 [2024-10-07 07:47:50.430301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:50.899 [2024-10-07 07:47:50.430422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:26:50.899 [2024-10-07 07:47:50.432813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:50.899 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.159 07:47:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.159 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:51.159 "name": "Existed_Raid", 00:26:51.159 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:51.159 "strip_size_kb": 64, 00:26:51.159 "state": "configuring", 00:26:51.159 "raid_level": "raid5f", 00:26:51.159 "superblock": true, 00:26:51.159 "num_base_bdevs": 3, 00:26:51.159 "num_base_bdevs_discovered": 2, 00:26:51.159 "num_base_bdevs_operational": 3, 00:26:51.159 "base_bdevs_list": [ 00:26:51.159 { 00:26:51.159 "name": "BaseBdev1", 00:26:51.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.159 "is_configured": false, 00:26:51.159 "data_offset": 0, 00:26:51.159 "data_size": 0 00:26:51.159 }, 00:26:51.159 { 00:26:51.159 "name": "BaseBdev2", 00:26:51.159 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:51.159 "is_configured": true, 00:26:51.159 "data_offset": 2048, 00:26:51.159 "data_size": 63488 00:26:51.159 }, 00:26:51.159 { 00:26:51.159 "name": "BaseBdev3", 00:26:51.159 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:51.159 "is_configured": true, 00:26:51.159 "data_offset": 2048, 00:26:51.159 "data_size": 63488 00:26:51.159 } 00:26:51.159 ] 00:26:51.159 }' 00:26:51.159 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.159 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.418 [2024-10-07 07:47:50.886237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:51.418 
07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.418 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:26:51.418 "name": "Existed_Raid", 00:26:51.418 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:51.418 "strip_size_kb": 64, 00:26:51.418 "state": "configuring", 00:26:51.418 "raid_level": "raid5f", 00:26:51.418 "superblock": true, 00:26:51.418 "num_base_bdevs": 3, 00:26:51.418 "num_base_bdevs_discovered": 1, 00:26:51.418 "num_base_bdevs_operational": 3, 00:26:51.418 "base_bdevs_list": [ 00:26:51.418 { 00:26:51.418 "name": "BaseBdev1", 00:26:51.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.418 "is_configured": false, 00:26:51.418 "data_offset": 0, 00:26:51.418 "data_size": 0 00:26:51.418 }, 00:26:51.418 { 00:26:51.418 "name": null, 00:26:51.418 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:51.418 "is_configured": false, 00:26:51.418 "data_offset": 0, 00:26:51.418 "data_size": 63488 00:26:51.419 }, 00:26:51.419 { 00:26:51.419 "name": "BaseBdev3", 00:26:51.419 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:51.419 "is_configured": true, 00:26:51.419 "data_offset": 2048, 00:26:51.419 "data_size": 63488 00:26:51.419 } 00:26:51.419 ] 00:26:51.419 }' 00:26:51.419 07:47:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.419 07:47:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.988 [2024-10-07 07:47:51.434764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:51.988 BaseBdev1 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:51.988 
07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.988 [ 00:26:51.988 { 00:26:51.988 "name": "BaseBdev1", 00:26:51.988 "aliases": [ 00:26:51.988 "91867da3-3ae9-4941-af9a-ed49ad28daee" 00:26:51.988 ], 00:26:51.988 "product_name": "Malloc disk", 00:26:51.988 "block_size": 512, 00:26:51.988 "num_blocks": 65536, 00:26:51.988 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:51.988 "assigned_rate_limits": { 00:26:51.988 "rw_ios_per_sec": 0, 00:26:51.988 "rw_mbytes_per_sec": 0, 00:26:51.988 "r_mbytes_per_sec": 0, 00:26:51.988 "w_mbytes_per_sec": 0 00:26:51.988 }, 00:26:51.988 "claimed": true, 00:26:51.988 "claim_type": "exclusive_write", 00:26:51.988 "zoned": false, 00:26:51.988 "supported_io_types": { 00:26:51.988 "read": true, 00:26:51.988 "write": true, 00:26:51.988 "unmap": true, 00:26:51.988 "flush": true, 00:26:51.988 "reset": true, 00:26:51.988 "nvme_admin": false, 00:26:51.988 "nvme_io": false, 00:26:51.988 "nvme_io_md": false, 00:26:51.988 "write_zeroes": true, 00:26:51.988 "zcopy": true, 00:26:51.988 "get_zone_info": false, 00:26:51.988 "zone_management": false, 00:26:51.988 "zone_append": false, 00:26:51.988 "compare": false, 00:26:51.988 "compare_and_write": false, 00:26:51.988 "abort": true, 00:26:51.988 "seek_hole": false, 00:26:51.988 "seek_data": false, 00:26:51.988 "copy": true, 00:26:51.988 "nvme_iov_md": false 00:26:51.988 }, 00:26:51.988 "memory_domains": [ 00:26:51.988 { 00:26:51.988 "dma_device_id": "system", 00:26:51.988 "dma_device_type": 1 00:26:51.988 }, 00:26:51.988 { 00:26:51.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.988 "dma_device_type": 2 00:26:51.988 } 00:26:51.988 ], 00:26:51.988 "driver_specific": {} 00:26:51.988 } 00:26:51.988 ] 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.988 
07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:51.988 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:26:51.989 "name": "Existed_Raid", 00:26:51.989 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:51.989 "strip_size_kb": 64, 00:26:51.989 "state": "configuring", 00:26:51.989 "raid_level": "raid5f", 00:26:51.989 "superblock": true, 00:26:51.989 "num_base_bdevs": 3, 00:26:51.989 "num_base_bdevs_discovered": 2, 00:26:51.989 "num_base_bdevs_operational": 3, 00:26:51.989 "base_bdevs_list": [ 00:26:51.989 { 00:26:51.989 "name": "BaseBdev1", 00:26:51.989 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:51.989 "is_configured": true, 00:26:51.989 "data_offset": 2048, 00:26:51.989 "data_size": 63488 00:26:51.989 }, 00:26:51.989 { 00:26:51.989 "name": null, 00:26:51.989 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:51.989 "is_configured": false, 00:26:51.989 "data_offset": 0, 00:26:51.989 "data_size": 63488 00:26:51.989 }, 00:26:51.989 { 00:26:51.989 "name": "BaseBdev3", 00:26:51.989 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:51.989 "is_configured": true, 00:26:51.989 "data_offset": 2048, 00:26:51.989 "data_size": 63488 00:26:51.989 } 00:26:51.989 ] 00:26:51.989 }' 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.989 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.558 [2024-10-07 07:47:51.963020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.558 07:47:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.558 07:47:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:52.558 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:52.558 "name": "Existed_Raid", 00:26:52.558 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:52.558 "strip_size_kb": 64, 00:26:52.558 "state": "configuring", 00:26:52.558 "raid_level": "raid5f", 00:26:52.558 "superblock": true, 00:26:52.558 "num_base_bdevs": 3, 00:26:52.558 "num_base_bdevs_discovered": 1, 00:26:52.558 "num_base_bdevs_operational": 3, 00:26:52.558 "base_bdevs_list": [ 00:26:52.558 { 00:26:52.558 "name": "BaseBdev1", 00:26:52.558 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:52.558 "is_configured": true, 00:26:52.558 "data_offset": 2048, 00:26:52.558 "data_size": 63488 00:26:52.558 }, 00:26:52.558 { 00:26:52.558 "name": null, 00:26:52.558 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:52.558 "is_configured": false, 00:26:52.558 "data_offset": 0, 00:26:52.558 "data_size": 63488 00:26:52.558 }, 00:26:52.558 { 00:26:52.558 "name": null, 00:26:52.558 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:52.558 "is_configured": false, 00:26:52.558 "data_offset": 0, 00:26:52.558 "data_size": 63488 00:26:52.558 } 00:26:52.558 ] 00:26:52.558 }' 00:26:52.558 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.558 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.128 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.128 07:47:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:53.128 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.129 [2024-10-07 07:47:52.495137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:53.129 "name": "Existed_Raid", 00:26:53.129 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:53.129 "strip_size_kb": 64, 00:26:53.129 "state": "configuring", 00:26:53.129 "raid_level": "raid5f", 00:26:53.129 "superblock": true, 00:26:53.129 "num_base_bdevs": 3, 00:26:53.129 "num_base_bdevs_discovered": 2, 00:26:53.129 "num_base_bdevs_operational": 3, 00:26:53.129 "base_bdevs_list": [ 00:26:53.129 { 00:26:53.129 "name": "BaseBdev1", 00:26:53.129 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:53.129 "is_configured": true, 00:26:53.129 "data_offset": 2048, 00:26:53.129 "data_size": 63488 00:26:53.129 }, 00:26:53.129 { 00:26:53.129 "name": null, 00:26:53.129 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:53.129 "is_configured": false, 00:26:53.129 "data_offset": 0, 00:26:53.129 "data_size": 63488 00:26:53.129 }, 00:26:53.129 { 00:26:53.129 "name": "BaseBdev3", 00:26:53.129 
"uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:53.129 "is_configured": true, 00:26:53.129 "data_offset": 2048, 00:26:53.129 "data_size": 63488 00:26:53.129 } 00:26:53.129 ] 00:26:53.129 }' 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:53.129 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:53.698 07:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.698 [2024-10-07 07:47:52.995279] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:53.699 07:47:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:53.699 "name": "Existed_Raid", 00:26:53.699 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:53.699 "strip_size_kb": 64, 00:26:53.699 "state": "configuring", 00:26:53.699 "raid_level": "raid5f", 00:26:53.699 "superblock": true, 00:26:53.699 "num_base_bdevs": 3, 00:26:53.699 "num_base_bdevs_discovered": 1, 00:26:53.699 "num_base_bdevs_operational": 3, 00:26:53.699 
"base_bdevs_list": [ 00:26:53.699 { 00:26:53.699 "name": null, 00:26:53.699 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:53.699 "is_configured": false, 00:26:53.699 "data_offset": 0, 00:26:53.699 "data_size": 63488 00:26:53.699 }, 00:26:53.699 { 00:26:53.699 "name": null, 00:26:53.699 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:53.699 "is_configured": false, 00:26:53.699 "data_offset": 0, 00:26:53.699 "data_size": 63488 00:26:53.699 }, 00:26:53.699 { 00:26:53.699 "name": "BaseBdev3", 00:26:53.699 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:53.699 "is_configured": true, 00:26:53.699 "data_offset": 2048, 00:26:53.699 "data_size": 63488 00:26:53.699 } 00:26:53.699 ] 00:26:53.699 }' 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:53.699 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:26:54.268 [2024-10-07 07:47:53.592658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.268 07:47:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.268 "name": "Existed_Raid", 00:26:54.268 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:54.268 "strip_size_kb": 64, 00:26:54.268 "state": "configuring", 00:26:54.268 "raid_level": "raid5f", 00:26:54.268 "superblock": true, 00:26:54.268 "num_base_bdevs": 3, 00:26:54.268 "num_base_bdevs_discovered": 2, 00:26:54.268 "num_base_bdevs_operational": 3, 00:26:54.268 "base_bdevs_list": [ 00:26:54.268 { 00:26:54.268 "name": null, 00:26:54.268 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:54.268 "is_configured": false, 00:26:54.268 "data_offset": 0, 00:26:54.268 "data_size": 63488 00:26:54.268 }, 00:26:54.268 { 00:26:54.268 "name": "BaseBdev2", 00:26:54.268 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:54.268 "is_configured": true, 00:26:54.268 "data_offset": 2048, 00:26:54.268 "data_size": 63488 00:26:54.268 }, 00:26:54.268 { 00:26:54.268 "name": "BaseBdev3", 00:26:54.268 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:54.268 "is_configured": true, 00:26:54.268 "data_offset": 2048, 00:26:54.268 "data_size": 63488 00:26:54.268 } 00:26:54.268 ] 00:26:54.268 }' 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.268 07:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.527 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.527 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.527 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.527 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:26:54.527 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.787 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:54.787 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 91867da3-3ae9-4941-af9a-ed49ad28daee 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.788 [2024-10-07 07:47:54.175118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:54.788 [2024-10-07 07:47:54.175367] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:54.788 [2024-10-07 07:47:54.175389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:54.788 [2024-10-07 07:47:54.175664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:54.788 NewBaseBdev 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:54.788 07:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.788 [2024-10-07 07:47:54.181255] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:54.788 [2024-10-07 07:47:54.181281] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:54.788 [2024-10-07 07:47:54.181569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.788 [ 00:26:54.788 { 00:26:54.788 "name": "NewBaseBdev", 00:26:54.788 "aliases": [ 00:26:54.788 "91867da3-3ae9-4941-af9a-ed49ad28daee" 00:26:54.788 ], 00:26:54.788 "product_name": "Malloc 
disk", 00:26:54.788 "block_size": 512, 00:26:54.788 "num_blocks": 65536, 00:26:54.788 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:54.788 "assigned_rate_limits": { 00:26:54.788 "rw_ios_per_sec": 0, 00:26:54.788 "rw_mbytes_per_sec": 0, 00:26:54.788 "r_mbytes_per_sec": 0, 00:26:54.788 "w_mbytes_per_sec": 0 00:26:54.788 }, 00:26:54.788 "claimed": true, 00:26:54.788 "claim_type": "exclusive_write", 00:26:54.788 "zoned": false, 00:26:54.788 "supported_io_types": { 00:26:54.788 "read": true, 00:26:54.788 "write": true, 00:26:54.788 "unmap": true, 00:26:54.788 "flush": true, 00:26:54.788 "reset": true, 00:26:54.788 "nvme_admin": false, 00:26:54.788 "nvme_io": false, 00:26:54.788 "nvme_io_md": false, 00:26:54.788 "write_zeroes": true, 00:26:54.788 "zcopy": true, 00:26:54.788 "get_zone_info": false, 00:26:54.788 "zone_management": false, 00:26:54.788 "zone_append": false, 00:26:54.788 "compare": false, 00:26:54.788 "compare_and_write": false, 00:26:54.788 "abort": true, 00:26:54.788 "seek_hole": false, 00:26:54.788 "seek_data": false, 00:26:54.788 "copy": true, 00:26:54.788 "nvme_iov_md": false 00:26:54.788 }, 00:26:54.788 "memory_domains": [ 00:26:54.788 { 00:26:54.788 "dma_device_id": "system", 00:26:54.788 "dma_device_type": 1 00:26:54.788 }, 00:26:54.788 { 00:26:54.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.788 "dma_device_type": 2 00:26:54.788 } 00:26:54.788 ], 00:26:54.788 "driver_specific": {} 00:26:54.788 } 00:26:54.788 ] 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:54.788 07:47:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:54.788 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.788 "name": "Existed_Raid", 00:26:54.788 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:54.788 "strip_size_kb": 64, 00:26:54.788 "state": "online", 00:26:54.788 "raid_level": "raid5f", 00:26:54.788 "superblock": true, 00:26:54.788 "num_base_bdevs": 3, 00:26:54.788 "num_base_bdevs_discovered": 3, 00:26:54.788 "num_base_bdevs_operational": 3, 00:26:54.788 
"base_bdevs_list": [ 00:26:54.788 { 00:26:54.789 "name": "NewBaseBdev", 00:26:54.789 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:54.789 "is_configured": true, 00:26:54.789 "data_offset": 2048, 00:26:54.789 "data_size": 63488 00:26:54.789 }, 00:26:54.789 { 00:26:54.789 "name": "BaseBdev2", 00:26:54.789 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:54.789 "is_configured": true, 00:26:54.789 "data_offset": 2048, 00:26:54.789 "data_size": 63488 00:26:54.789 }, 00:26:54.789 { 00:26:54.789 "name": "BaseBdev3", 00:26:54.789 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:54.789 "is_configured": true, 00:26:54.789 "data_offset": 2048, 00:26:54.789 "data_size": 63488 00:26:54.789 } 00:26:54.789 ] 00:26:54.789 }' 00:26:54.789 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.789 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.358 [2024-10-07 07:47:54.652883] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:55.358 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.358 "name": "Existed_Raid", 00:26:55.358 "aliases": [ 00:26:55.358 "4d6bce19-36bc-43f4-a869-7ecccc65c815" 00:26:55.358 ], 00:26:55.358 "product_name": "Raid Volume", 00:26:55.358 "block_size": 512, 00:26:55.358 "num_blocks": 126976, 00:26:55.358 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:55.358 "assigned_rate_limits": { 00:26:55.358 "rw_ios_per_sec": 0, 00:26:55.358 "rw_mbytes_per_sec": 0, 00:26:55.358 "r_mbytes_per_sec": 0, 00:26:55.358 "w_mbytes_per_sec": 0 00:26:55.358 }, 00:26:55.358 "claimed": false, 00:26:55.358 "zoned": false, 00:26:55.358 "supported_io_types": { 00:26:55.358 "read": true, 00:26:55.358 "write": true, 00:26:55.358 "unmap": false, 00:26:55.358 "flush": false, 00:26:55.358 "reset": true, 00:26:55.358 "nvme_admin": false, 00:26:55.358 "nvme_io": false, 00:26:55.358 "nvme_io_md": false, 00:26:55.358 "write_zeroes": true, 00:26:55.358 "zcopy": false, 00:26:55.358 "get_zone_info": false, 00:26:55.358 "zone_management": false, 00:26:55.358 "zone_append": false, 00:26:55.358 "compare": false, 00:26:55.358 "compare_and_write": false, 00:26:55.358 "abort": false, 00:26:55.358 "seek_hole": false, 00:26:55.358 "seek_data": false, 00:26:55.358 "copy": false, 00:26:55.358 "nvme_iov_md": false 00:26:55.358 }, 00:26:55.358 "driver_specific": { 00:26:55.358 "raid": { 00:26:55.358 "uuid": "4d6bce19-36bc-43f4-a869-7ecccc65c815", 00:26:55.358 "strip_size_kb": 64, 00:26:55.358 "state": "online", 00:26:55.358 "raid_level": "raid5f", 00:26:55.358 "superblock": true, 00:26:55.358 
"num_base_bdevs": 3, 00:26:55.358 "num_base_bdevs_discovered": 3, 00:26:55.358 "num_base_bdevs_operational": 3, 00:26:55.358 "base_bdevs_list": [ 00:26:55.358 { 00:26:55.358 "name": "NewBaseBdev", 00:26:55.358 "uuid": "91867da3-3ae9-4941-af9a-ed49ad28daee", 00:26:55.358 "is_configured": true, 00:26:55.358 "data_offset": 2048, 00:26:55.358 "data_size": 63488 00:26:55.358 }, 00:26:55.358 { 00:26:55.358 "name": "BaseBdev2", 00:26:55.358 "uuid": "5023c0b7-17aa-4c9e-bb30-511a89cd694f", 00:26:55.358 "is_configured": true, 00:26:55.358 "data_offset": 2048, 00:26:55.358 "data_size": 63488 00:26:55.358 }, 00:26:55.358 { 00:26:55.358 "name": "BaseBdev3", 00:26:55.358 "uuid": "6d9c8577-debf-4794-9802-b2093be414ea", 00:26:55.359 "is_configured": true, 00:26:55.359 "data_offset": 2048, 00:26:55.359 "data_size": 63488 00:26:55.359 } 00:26:55.359 ] 00:26:55.359 } 00:26:55.359 } 00:26:55.359 }' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:55.359 BaseBdev2 00:26:55.359 BaseBdev3' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.359 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.627 [2024-10-07 07:47:54.920700] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:55.627 [2024-10-07 07:47:54.920743] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:55.627 [2024-10-07 07:47:54.920830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.627 [2024-10-07 07:47:54.921135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:55.627 [2024-10-07 07:47:54.921160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80743 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 80743 ']' 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # kill -0 80743 00:26:55.627 07:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 80743 00:26:55.627 killing process with pid 80743 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 80743' 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 80743 00:26:55.627 [2024-10-07 07:47:54.963702] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:55.627 07:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 80743 00:26:55.900 [2024-10-07 07:47:55.288509] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:57.279 07:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:57.279 00:26:57.279 real 0m11.137s 00:26:57.279 user 0m17.740s 00:26:57.279 sys 0m1.979s 00:26:57.279 07:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:26:57.279 07:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.279 ************************************ 00:26:57.279 END TEST raid5f_state_function_test_sb 00:26:57.279 ************************************ 00:26:57.279 07:47:56 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:26:57.279 07:47:56 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:26:57.279 
07:47:56 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:26:57.279 07:47:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:57.279 ************************************ 00:26:57.279 START TEST raid5f_superblock_test 00:26:57.279 ************************************ 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid5f 3 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81372 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81372 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 81372 ']' 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:26:57.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:26:57.279 07:47:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.279 [2024-10-07 07:47:56.773865] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:26:57.279 [2024-10-07 07:47:56.774002] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81372 ] 00:26:57.538 [2024-10-07 07:47:56.935400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.797 [2024-10-07 07:47:57.157933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.797 [2024-10-07 07:47:57.350986] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.797 [2024-10-07 07:47:57.351018] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.366 malloc1 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.366 [2024-10-07 07:47:57.728182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:58.366 [2024-10-07 07:47:57.728254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.366 [2024-10-07 07:47:57.728281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:58.366 [2024-10-07 07:47:57.728297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.366 [2024-10-07 07:47:57.730726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.366 [2024-10-07 07:47:57.730766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:58.366 pt1 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.366 malloc2 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.366 [2024-10-07 07:47:57.796178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:58.366 [2024-10-07 07:47:57.796246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.366 [2024-10-07 07:47:57.796275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:58.366 [2024-10-07 07:47:57.796287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.366 [2024-10-07 07:47:57.798781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.366 [2024-10-07 07:47:57.798820] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:58.366 pt2 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.366 malloc3 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:58.366 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 [2024-10-07 07:47:57.846791] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:58.367 [2024-10-07 07:47:57.846855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.367 [2024-10-07 07:47:57.846882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:58.367 [2024-10-07 07:47:57.846895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.367 [2024-10-07 07:47:57.849597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.367 [2024-10-07 07:47:57.849646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:58.367 pt3 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 [2024-10-07 07:47:57.858879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:58.367 [2024-10-07 07:47:57.861144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:58.367 [2024-10-07 07:47:57.861219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:58.367 [2024-10-07 07:47:57.861423] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:58.367 [2024-10-07 07:47:57.861445] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:26:58.367 [2024-10-07 07:47:57.861754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:58.367 [2024-10-07 07:47:57.867877] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:58.367 [2024-10-07 07:47:57.867904] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:58.367 [2024-10-07 07:47:57.868157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.367 
07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:58.367 "name": "raid_bdev1", 00:26:58.367 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:26:58.367 "strip_size_kb": 64, 00:26:58.367 "state": "online", 00:26:58.367 "raid_level": "raid5f", 00:26:58.367 "superblock": true, 00:26:58.367 "num_base_bdevs": 3, 00:26:58.367 "num_base_bdevs_discovered": 3, 00:26:58.367 "num_base_bdevs_operational": 3, 00:26:58.367 "base_bdevs_list": [ 00:26:58.367 { 00:26:58.367 "name": "pt1", 00:26:58.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:58.367 "is_configured": true, 00:26:58.367 "data_offset": 2048, 00:26:58.367 "data_size": 63488 00:26:58.367 }, 00:26:58.367 { 00:26:58.367 "name": "pt2", 00:26:58.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:58.367 "is_configured": true, 00:26:58.367 "data_offset": 2048, 00:26:58.367 "data_size": 63488 00:26:58.367 }, 00:26:58.367 { 00:26:58.367 "name": "pt3", 00:26:58.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:58.367 "is_configured": true, 00:26:58.367 "data_offset": 2048, 00:26:58.367 "data_size": 63488 00:26:58.367 } 00:26:58.367 ] 00:26:58.367 }' 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:58.367 07:47:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:58.934 07:47:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:58.934 [2024-10-07 07:47:58.243820] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:58.934 "name": "raid_bdev1", 00:26:58.934 "aliases": [ 00:26:58.934 "d0d688ca-9cff-4adc-bee6-1575e251cd96" 00:26:58.934 ], 00:26:58.934 "product_name": "Raid Volume", 00:26:58.934 "block_size": 512, 00:26:58.934 "num_blocks": 126976, 00:26:58.934 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:26:58.934 "assigned_rate_limits": { 00:26:58.934 "rw_ios_per_sec": 0, 00:26:58.934 "rw_mbytes_per_sec": 0, 00:26:58.934 "r_mbytes_per_sec": 0, 00:26:58.934 "w_mbytes_per_sec": 0 00:26:58.934 }, 00:26:58.934 "claimed": false, 00:26:58.934 "zoned": false, 00:26:58.934 "supported_io_types": { 00:26:58.934 "read": true, 00:26:58.934 "write": true, 00:26:58.934 "unmap": false, 00:26:58.934 "flush": false, 00:26:58.934 "reset": true, 00:26:58.934 "nvme_admin": false, 00:26:58.934 "nvme_io": false, 00:26:58.934 "nvme_io_md": false, 
00:26:58.934 "write_zeroes": true, 00:26:58.934 "zcopy": false, 00:26:58.934 "get_zone_info": false, 00:26:58.934 "zone_management": false, 00:26:58.934 "zone_append": false, 00:26:58.934 "compare": false, 00:26:58.934 "compare_and_write": false, 00:26:58.934 "abort": false, 00:26:58.934 "seek_hole": false, 00:26:58.934 "seek_data": false, 00:26:58.934 "copy": false, 00:26:58.934 "nvme_iov_md": false 00:26:58.934 }, 00:26:58.934 "driver_specific": { 00:26:58.934 "raid": { 00:26:58.934 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:26:58.934 "strip_size_kb": 64, 00:26:58.934 "state": "online", 00:26:58.934 "raid_level": "raid5f", 00:26:58.934 "superblock": true, 00:26:58.934 "num_base_bdevs": 3, 00:26:58.934 "num_base_bdevs_discovered": 3, 00:26:58.934 "num_base_bdevs_operational": 3, 00:26:58.934 "base_bdevs_list": [ 00:26:58.934 { 00:26:58.934 "name": "pt1", 00:26:58.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:58.934 "is_configured": true, 00:26:58.934 "data_offset": 2048, 00:26:58.934 "data_size": 63488 00:26:58.934 }, 00:26:58.934 { 00:26:58.934 "name": "pt2", 00:26:58.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:58.934 "is_configured": true, 00:26:58.934 "data_offset": 2048, 00:26:58.934 "data_size": 63488 00:26:58.934 }, 00:26:58.934 { 00:26:58.934 "name": "pt3", 00:26:58.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:58.934 "is_configured": true, 00:26:58.934 "data_offset": 2048, 00:26:58.934 "data_size": 63488 00:26:58.934 } 00:26:58.934 ] 00:26:58.934 } 00:26:58.934 } 00:26:58.934 }' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:58.934 pt2 00:26:58.934 pt3' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:58.934 
07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.934 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 [2024-10-07 07:47:58.507865] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d0d688ca-9cff-4adc-bee6-1575e251cd96 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d0d688ca-9cff-4adc-bee6-1575e251cd96 ']' 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:59.193 07:47:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 [2024-10-07 07:47:58.551648] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:59.193 [2024-10-07 07:47:58.551686] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:59.193 [2024-10-07 07:47:58.551787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:59.193 [2024-10-07 07:47:58.551868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:59.193 [2024-10-07 07:47:58.551880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.193 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.193 [2024-10-07 07:47:58.667724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:59.193 [2024-10-07 07:47:58.670033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:59.193 [2024-10-07 07:47:58.670093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:59.193 [2024-10-07 07:47:58.670150] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:59.193 [2024-10-07 07:47:58.670206] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:59.193 [2024-10-07 07:47:58.670228] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:59.194 [2024-10-07 07:47:58.670266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:59.194 [2024-10-07 07:47:58.670280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:59.194 request: 00:26:59.194 { 00:26:59.194 "name": "raid_bdev1", 00:26:59.194 "raid_level": "raid5f", 00:26:59.194 "base_bdevs": [ 00:26:59.194 "malloc1", 00:26:59.194 "malloc2", 00:26:59.194 "malloc3" 00:26:59.194 ], 00:26:59.194 "strip_size_kb": 64, 00:26:59.194 "superblock": false, 00:26:59.194 "method": "bdev_raid_create", 00:26:59.194 "req_id": 1 00:26:59.194 } 00:26:59.194 Got JSON-RPC error response 00:26:59.194 response: 00:26:59.194 { 00:26:59.194 "code": -17, 00:26:59.194 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:59.194 } 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:59.194 
07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.194 [2024-10-07 07:47:58.711690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:59.194 [2024-10-07 07:47:58.711772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.194 [2024-10-07 07:47:58.711796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:59.194 [2024-10-07 07:47:58.711810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.194 [2024-10-07 07:47:58.714530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.194 [2024-10-07 07:47:58.714578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:59.194 [2024-10-07 07:47:58.714676] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:59.194 [2024-10-07 07:47:58.714746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:59.194 pt1 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.194 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.453 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:59.453 "name": "raid_bdev1", 00:26:59.453 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:26:59.453 "strip_size_kb": 64, 00:26:59.453 "state": "configuring", 00:26:59.453 "raid_level": "raid5f", 00:26:59.453 "superblock": true, 00:26:59.453 "num_base_bdevs": 3, 00:26:59.453 "num_base_bdevs_discovered": 1, 00:26:59.453 
"num_base_bdevs_operational": 3, 00:26:59.453 "base_bdevs_list": [ 00:26:59.453 { 00:26:59.453 "name": "pt1", 00:26:59.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:59.453 "is_configured": true, 00:26:59.453 "data_offset": 2048, 00:26:59.453 "data_size": 63488 00:26:59.453 }, 00:26:59.453 { 00:26:59.453 "name": null, 00:26:59.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:59.453 "is_configured": false, 00:26:59.453 "data_offset": 2048, 00:26:59.453 "data_size": 63488 00:26:59.453 }, 00:26:59.453 { 00:26:59.453 "name": null, 00:26:59.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:59.453 "is_configured": false, 00:26:59.453 "data_offset": 2048, 00:26:59.453 "data_size": 63488 00:26:59.453 } 00:26:59.453 ] 00:26:59.453 }' 00:26:59.453 07:47:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:59.453 07:47:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.713 [2024-10-07 07:47:59.203803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:59.713 [2024-10-07 07:47:59.203877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.713 [2024-10-07 07:47:59.203906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:59.713 [2024-10-07 07:47:59.203920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.713 [2024-10-07 07:47:59.204391] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.713 [2024-10-07 07:47:59.204420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:59.713 [2024-10-07 07:47:59.204523] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:59.713 [2024-10-07 07:47:59.204548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:59.713 pt2 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.713 [2024-10-07 07:47:59.211830] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:59.713 "name": "raid_bdev1", 00:26:59.713 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:26:59.713 "strip_size_kb": 64, 00:26:59.713 "state": "configuring", 00:26:59.713 "raid_level": "raid5f", 00:26:59.713 "superblock": true, 00:26:59.713 "num_base_bdevs": 3, 00:26:59.713 "num_base_bdevs_discovered": 1, 00:26:59.713 "num_base_bdevs_operational": 3, 00:26:59.713 "base_bdevs_list": [ 00:26:59.713 { 00:26:59.713 "name": "pt1", 00:26:59.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:59.713 "is_configured": true, 00:26:59.713 "data_offset": 2048, 00:26:59.713 "data_size": 63488 00:26:59.713 }, 00:26:59.713 { 00:26:59.713 "name": null, 00:26:59.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:59.713 "is_configured": false, 00:26:59.713 "data_offset": 0, 00:26:59.713 "data_size": 63488 00:26:59.713 }, 00:26:59.713 { 00:26:59.713 "name": null, 00:26:59.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:59.713 "is_configured": false, 00:26:59.713 "data_offset": 2048, 00:26:59.713 "data_size": 63488 00:26:59.713 } 00:26:59.713 ] 00:26:59.713 }' 00:26:59.713 07:47:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:59.713 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.282 [2024-10-07 07:47:59.663894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:00.282 [2024-10-07 07:47:59.663969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.282 [2024-10-07 07:47:59.663992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:00.282 [2024-10-07 07:47:59.664007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.282 [2024-10-07 07:47:59.664496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.282 [2024-10-07 07:47:59.664553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:00.282 [2024-10-07 07:47:59.664645] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:00.282 [2024-10-07 07:47:59.664674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:00.282 pt2 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:00.282 07:47:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.282 [2024-10-07 07:47:59.675927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:00.282 [2024-10-07 07:47:59.675985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.282 [2024-10-07 07:47:59.676004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:00.282 [2024-10-07 07:47:59.676017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.282 [2024-10-07 07:47:59.676470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.282 [2024-10-07 07:47:59.676520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:00.282 [2024-10-07 07:47:59.676600] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:00.282 [2024-10-07 07:47:59.676627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:00.282 [2024-10-07 07:47:59.676801] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:00.282 [2024-10-07 07:47:59.676823] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:00.282 [2024-10-07 07:47:59.677115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:00.282 [2024-10-07 07:47:59.682810] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:00.282 [2024-10-07 07:47:59.682835] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:00.282 pt3 00:27:00.282 [2024-10-07 07:47:59.683039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.282 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:00.283 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:00.283 "name": "raid_bdev1", 00:27:00.283 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:00.283 "strip_size_kb": 64, 00:27:00.283 "state": "online", 00:27:00.283 "raid_level": "raid5f", 00:27:00.283 "superblock": true, 00:27:00.283 "num_base_bdevs": 3, 00:27:00.283 "num_base_bdevs_discovered": 3, 00:27:00.283 "num_base_bdevs_operational": 3, 00:27:00.283 "base_bdevs_list": [ 00:27:00.283 { 00:27:00.283 "name": "pt1", 00:27:00.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:00.283 "is_configured": true, 00:27:00.283 "data_offset": 2048, 00:27:00.283 "data_size": 63488 00:27:00.283 }, 00:27:00.283 { 00:27:00.283 "name": "pt2", 00:27:00.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:00.283 "is_configured": true, 00:27:00.283 "data_offset": 2048, 00:27:00.283 "data_size": 63488 00:27:00.283 }, 00:27:00.283 { 00:27:00.283 "name": "pt3", 00:27:00.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:00.283 "is_configured": true, 00:27:00.283 "data_offset": 2048, 00:27:00.283 "data_size": 63488 00:27:00.283 } 00:27:00.283 ] 00:27:00.283 }' 00:27:00.283 07:47:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:00.283 07:47:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.852 [2024-10-07 07:48:00.126458] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:00.852 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:00.852 "name": "raid_bdev1", 00:27:00.852 "aliases": [ 00:27:00.852 "d0d688ca-9cff-4adc-bee6-1575e251cd96" 00:27:00.852 ], 00:27:00.852 "product_name": "Raid Volume", 00:27:00.852 "block_size": 512, 00:27:00.852 "num_blocks": 126976, 00:27:00.852 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:00.852 "assigned_rate_limits": { 00:27:00.852 "rw_ios_per_sec": 0, 00:27:00.852 "rw_mbytes_per_sec": 0, 00:27:00.852 "r_mbytes_per_sec": 0, 00:27:00.852 "w_mbytes_per_sec": 0 00:27:00.852 }, 00:27:00.852 "claimed": false, 00:27:00.852 "zoned": false, 00:27:00.852 "supported_io_types": { 00:27:00.852 "read": true, 00:27:00.852 "write": true, 00:27:00.852 "unmap": false, 00:27:00.852 "flush": false, 00:27:00.852 "reset": true, 00:27:00.852 "nvme_admin": false, 00:27:00.852 "nvme_io": false, 00:27:00.853 "nvme_io_md": false, 00:27:00.853 "write_zeroes": true, 00:27:00.853 "zcopy": false, 00:27:00.853 
"get_zone_info": false, 00:27:00.853 "zone_management": false, 00:27:00.853 "zone_append": false, 00:27:00.853 "compare": false, 00:27:00.853 "compare_and_write": false, 00:27:00.853 "abort": false, 00:27:00.853 "seek_hole": false, 00:27:00.853 "seek_data": false, 00:27:00.853 "copy": false, 00:27:00.853 "nvme_iov_md": false 00:27:00.853 }, 00:27:00.853 "driver_specific": { 00:27:00.853 "raid": { 00:27:00.853 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:00.853 "strip_size_kb": 64, 00:27:00.853 "state": "online", 00:27:00.853 "raid_level": "raid5f", 00:27:00.853 "superblock": true, 00:27:00.853 "num_base_bdevs": 3, 00:27:00.853 "num_base_bdevs_discovered": 3, 00:27:00.853 "num_base_bdevs_operational": 3, 00:27:00.853 "base_bdevs_list": [ 00:27:00.853 { 00:27:00.853 "name": "pt1", 00:27:00.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:00.853 "is_configured": true, 00:27:00.853 "data_offset": 2048, 00:27:00.853 "data_size": 63488 00:27:00.853 }, 00:27:00.853 { 00:27:00.853 "name": "pt2", 00:27:00.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:00.853 "is_configured": true, 00:27:00.853 "data_offset": 2048, 00:27:00.853 "data_size": 63488 00:27:00.853 }, 00:27:00.853 { 00:27:00.853 "name": "pt3", 00:27:00.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:00.853 "is_configured": true, 00:27:00.853 "data_offset": 2048, 00:27:00.853 "data_size": 63488 00:27:00.853 } 00:27:00.853 ] 00:27:00.853 } 00:27:00.853 } 00:27:00.853 }' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:00.853 pt2 00:27:00.853 pt3' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:00.853 07:48:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:00.853 [2024-10-07 07:48:00.390503] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:00.853 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d0d688ca-9cff-4adc-bee6-1575e251cd96 '!=' d0d688ca-9cff-4adc-bee6-1575e251cd96 ']' 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.121 [2024-10-07 07:48:00.434380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:01.121 "name": "raid_bdev1", 00:27:01.121 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:01.121 "strip_size_kb": 64, 00:27:01.121 "state": "online", 00:27:01.121 "raid_level": "raid5f", 00:27:01.121 "superblock": true, 00:27:01.121 "num_base_bdevs": 3, 00:27:01.121 "num_base_bdevs_discovered": 2, 00:27:01.121 "num_base_bdevs_operational": 2, 00:27:01.121 "base_bdevs_list": [ 00:27:01.121 { 00:27:01.121 "name": null, 00:27:01.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.121 "is_configured": false, 00:27:01.121 "data_offset": 0, 00:27:01.121 "data_size": 63488 00:27:01.121 }, 00:27:01.121 { 00:27:01.121 "name": "pt2", 00:27:01.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:01.121 "is_configured": true, 00:27:01.121 "data_offset": 2048, 00:27:01.121 "data_size": 63488 00:27:01.121 }, 00:27:01.121 { 00:27:01.121 "name": "pt3", 00:27:01.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:01.121 "is_configured": true, 00:27:01.121 "data_offset": 2048, 00:27:01.121 "data_size": 63488 00:27:01.121 } 00:27:01.121 ] 00:27:01.121 }' 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:01.121 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.381 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:01.381 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.381 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.381 [2024-10-07 07:48:00.874403] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:01.381 [2024-10-07 07:48:00.874437] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:01.381 [2024-10-07 07:48:00.874517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:01.381 [2024-10-07 07:48:00.874576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:01.381 [2024-10-07 07:48:00.874593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.382 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.642 [2024-10-07 07:48:00.946392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:01.642 [2024-10-07 07:48:00.946457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.642 [2024-10-07 07:48:00.946477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:01.642 [2024-10-07 07:48:00.946491] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:27:01.642 [2024-10-07 07:48:00.949114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.642 [2024-10-07 07:48:00.949160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:01.642 [2024-10-07 07:48:00.949243] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:01.642 [2024-10-07 07:48:00.949292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:01.642 pt2 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.642 07:48:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.642 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:01.642 "name": "raid_bdev1", 00:27:01.642 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:01.642 "strip_size_kb": 64, 00:27:01.642 "state": "configuring", 00:27:01.642 "raid_level": "raid5f", 00:27:01.642 "superblock": true, 00:27:01.642 "num_base_bdevs": 3, 00:27:01.642 "num_base_bdevs_discovered": 1, 00:27:01.642 "num_base_bdevs_operational": 2, 00:27:01.642 "base_bdevs_list": [ 00:27:01.642 { 00:27:01.642 "name": null, 00:27:01.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.642 "is_configured": false, 00:27:01.642 "data_offset": 2048, 00:27:01.642 "data_size": 63488 00:27:01.642 }, 00:27:01.642 { 00:27:01.642 "name": "pt2", 00:27:01.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:01.642 "is_configured": true, 00:27:01.642 "data_offset": 2048, 00:27:01.642 "data_size": 63488 00:27:01.642 }, 00:27:01.642 { 00:27:01.642 "name": null, 00:27:01.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:01.642 "is_configured": false, 00:27:01.642 "data_offset": 2048, 00:27:01.642 "data_size": 63488 00:27:01.642 } 00:27:01.642 ] 00:27:01.642 }' 00:27:01.642 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:01.642 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.901 [2024-10-07 07:48:01.406542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:01.901 [2024-10-07 07:48:01.406620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.901 [2024-10-07 07:48:01.406648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:01.901 [2024-10-07 07:48:01.406663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.901 [2024-10-07 07:48:01.407176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.901 [2024-10-07 07:48:01.407210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:01.901 [2024-10-07 07:48:01.407304] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:01.901 [2024-10-07 07:48:01.407341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:01.901 [2024-10-07 07:48:01.407460] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:01.901 [2024-10-07 07:48:01.407475] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:01.901 [2024-10-07 07:48:01.407758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:01.901 [2024-10-07 07:48:01.413484] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:01.901 [2024-10-07 07:48:01.413514] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:27:01.901 [2024-10-07 07:48:01.413879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.901 pt3 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.901 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.902 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:01.902 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.902 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.902 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:02.160 07:48:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.160 "name": "raid_bdev1", 00:27:02.160 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:02.160 "strip_size_kb": 64, 00:27:02.160 "state": "online", 00:27:02.160 "raid_level": "raid5f", 00:27:02.160 "superblock": true, 00:27:02.160 "num_base_bdevs": 3, 00:27:02.160 "num_base_bdevs_discovered": 2, 00:27:02.160 "num_base_bdevs_operational": 2, 00:27:02.160 "base_bdevs_list": [ 00:27:02.160 { 00:27:02.160 "name": null, 00:27:02.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.160 "is_configured": false, 00:27:02.160 "data_offset": 2048, 00:27:02.160 "data_size": 63488 00:27:02.160 }, 00:27:02.160 { 00:27:02.160 "name": "pt2", 00:27:02.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.160 "is_configured": true, 00:27:02.160 "data_offset": 2048, 00:27:02.160 "data_size": 63488 00:27:02.160 }, 00:27:02.160 { 00:27:02.160 "name": "pt3", 00:27:02.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.160 "is_configured": true, 00:27:02.160 "data_offset": 2048, 00:27:02.160 "data_size": 63488 00:27:02.160 } 00:27:02.160 ] 00:27:02.160 }' 00:27:02.160 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.160 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.419 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:02.419 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.419 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.419 [2024-10-07 07:48:01.861914] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.419 [2024-10-07 07:48:01.861957] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.419 [2024-10-07 07:48:01.862039] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.419 [2024-10-07 07:48:01.862111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.419 [2024-10-07 07:48:01.862125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.420 [2024-10-07 07:48:01.929972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:02.420 [2024-10-07 07:48:01.930068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.420 [2024-10-07 07:48:01.930096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:02.420 [2024-10-07 07:48:01.930111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.420 [2024-10-07 07:48:01.933066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.420 [2024-10-07 07:48:01.933116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:02.420 [2024-10-07 07:48:01.933232] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:02.420 [2024-10-07 07:48:01.933284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:02.420 [2024-10-07 07:48:01.933437] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:02.420 [2024-10-07 07:48:01.933463] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.420 [2024-10-07 07:48:01.933485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:02.420 [2024-10-07 07:48:01.933557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:02.420 pt1 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:27:02.420 07:48:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.420 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:02.679 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.679 "name": "raid_bdev1", 00:27:02.679 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:02.679 "strip_size_kb": 64, 00:27:02.679 "state": "configuring", 00:27:02.679 "raid_level": "raid5f", 00:27:02.679 
"superblock": true, 00:27:02.679 "num_base_bdevs": 3, 00:27:02.679 "num_base_bdevs_discovered": 1, 00:27:02.679 "num_base_bdevs_operational": 2, 00:27:02.679 "base_bdevs_list": [ 00:27:02.679 { 00:27:02.679 "name": null, 00:27:02.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.679 "is_configured": false, 00:27:02.679 "data_offset": 2048, 00:27:02.679 "data_size": 63488 00:27:02.679 }, 00:27:02.679 { 00:27:02.679 "name": "pt2", 00:27:02.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.679 "is_configured": true, 00:27:02.679 "data_offset": 2048, 00:27:02.679 "data_size": 63488 00:27:02.679 }, 00:27:02.679 { 00:27:02.679 "name": null, 00:27:02.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.679 "is_configured": false, 00:27:02.679 "data_offset": 2048, 00:27:02.679 "data_size": 63488 00:27:02.679 } 00:27:02.679 ] 00:27:02.679 }' 00:27:02.679 07:48:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.679 07:48:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.938 [2024-10-07 07:48:02.458097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:02.938 [2024-10-07 07:48:02.458166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.938 [2024-10-07 07:48:02.458194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:02.938 [2024-10-07 07:48:02.458207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.938 [2024-10-07 07:48:02.458723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.938 [2024-10-07 07:48:02.458751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:02.938 [2024-10-07 07:48:02.458845] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:02.938 [2024-10-07 07:48:02.458871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:02.938 [2024-10-07 07:48:02.459005] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:02.938 [2024-10-07 07:48:02.459023] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:02.938 [2024-10-07 07:48:02.459332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:02.938 [2024-10-07 07:48:02.465683] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:02.938 [2024-10-07 07:48:02.465731] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:02.938 [2024-10-07 07:48:02.466046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.938 pt3 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 
-- # [[ 0 == 0 ]] 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:02.938 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.939 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:03.197 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:03.197 "name": "raid_bdev1", 00:27:03.197 "uuid": "d0d688ca-9cff-4adc-bee6-1575e251cd96", 00:27:03.197 "strip_size_kb": 64, 00:27:03.197 "state": "online", 00:27:03.197 "raid_level": 
"raid5f", 00:27:03.197 "superblock": true, 00:27:03.197 "num_base_bdevs": 3, 00:27:03.197 "num_base_bdevs_discovered": 2, 00:27:03.197 "num_base_bdevs_operational": 2, 00:27:03.197 "base_bdevs_list": [ 00:27:03.197 { 00:27:03.197 "name": null, 00:27:03.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.197 "is_configured": false, 00:27:03.197 "data_offset": 2048, 00:27:03.197 "data_size": 63488 00:27:03.197 }, 00:27:03.197 { 00:27:03.197 "name": "pt2", 00:27:03.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.197 "is_configured": true, 00:27:03.197 "data_offset": 2048, 00:27:03.197 "data_size": 63488 00:27:03.197 }, 00:27:03.197 { 00:27:03.197 "name": "pt3", 00:27:03.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:03.197 "is_configured": true, 00:27:03.197 "data_offset": 2048, 00:27:03.197 "data_size": 63488 00:27:03.197 } 00:27:03.197 ] 00:27:03.197 }' 00:27:03.197 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:03.197 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.457 [2024-10-07 07:48:02.954087] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d0d688ca-9cff-4adc-bee6-1575e251cd96 '!=' d0d688ca-9cff-4adc-bee6-1575e251cd96 ']' 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81372 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 81372 ']' 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # kill -0 81372 00:27:03.457 07:48:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # uname 00:27:03.457 07:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:27:03.457 07:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 81372 00:27:03.716 07:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:27:03.716 07:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:27:03.716 killing process with pid 81372 00:27:03.716 07:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 81372' 00:27:03.716 07:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # kill 81372 00:27:03.716 [2024-10-07 07:48:03.039218] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:03.716 [2024-10-07 07:48:03.039316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:27:03.716 07:48:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@977 -- # wait 81372 00:27:03.716 [2024-10-07 07:48:03.039380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:03.716 [2024-10-07 07:48:03.039395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:03.976 [2024-10-07 07:48:03.367123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:05.356 07:48:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:05.356 00:27:05.356 real 0m8.037s 00:27:05.356 user 0m12.478s 00:27:05.356 sys 0m1.464s 00:27:05.356 07:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:27:05.356 ************************************ 00:27:05.356 END TEST raid5f_superblock_test 00:27:05.356 ************************************ 00:27:05.356 07:48:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.356 07:48:04 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:27:05.356 07:48:04 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:27:05.356 07:48:04 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:27:05.356 07:48:04 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:27:05.356 07:48:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:05.356 ************************************ 00:27:05.356 START TEST raid5f_rebuild_test 00:27:05.356 ************************************ 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid5f 3 false false true 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:27:05.356 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81822 00:27:05.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81822 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # '[' -z 81822 ']' 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:05.357 07:48:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.357 [2024-10-07 07:48:04.884777] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:27:05.357 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:05.357 Zero copy mechanism will not be used. 00:27:05.357 [2024-10-07 07:48:04.885162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81822 ] 00:27:05.616 [2024-10-07 07:48:05.044619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.875 [2024-10-07 07:48:05.259954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.136 [2024-10-07 07:48:05.479059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.136 [2024-10-07 07:48:05.479128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # return 0 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.396 BaseBdev1_malloc 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.396 
07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.396 [2024-10-07 07:48:05.851773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:06.396 [2024-10-07 07:48:05.851998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.396 [2024-10-07 07:48:05.852066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:06.396 [2024-10-07 07:48:05.852170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.396 [2024-10-07 07:48:05.854925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.396 [2024-10-07 07:48:05.855089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:06.396 BaseBdev1 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.396 BaseBdev2_malloc 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.396 [2024-10-07 07:48:05.919838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:06.396 [2024-10-07 07:48:05.920039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.396 [2024-10-07 07:48:05.920097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:06.396 [2024-10-07 07:48:05.920192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.396 [2024-10-07 07:48:05.922773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.396 [2024-10-07 07:48:05.922818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:06.396 BaseBdev2 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.396 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 BaseBdev3_malloc 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 [2024-10-07 07:48:05.975888] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:06.656 [2024-10-07 07:48:05.976073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.656 [2024-10-07 07:48:05.976108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:06.656 [2024-10-07 07:48:05.976123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.656 [2024-10-07 07:48:05.978626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.656 [2024-10-07 07:48:05.978672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:06.656 BaseBdev3 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.656 07:48:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 spare_malloc 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 spare_delay 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 [2024-10-07 07:48:06.036325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:06.656 [2024-10-07 07:48:06.036506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.656 [2024-10-07 07:48:06.036553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:06.656 [2024-10-07 07:48:06.036569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.656 [2024-10-07 07:48:06.039233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.656 spare 00:27:06.656 [2024-10-07 07:48:06.039380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 [2024-10-07 07:48:06.044406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:06.656 [2024-10-07 07:48:06.046613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:06.656 [2024-10-07 07:48:06.046678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:06.656 [2024-10-07 07:48:06.046782] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:06.656 [2024-10-07 07:48:06.046793] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:27:06.656 [2024-10-07 
07:48:06.047079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:06.656 [2024-10-07 07:48:06.053011] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:06.656 [2024-10-07 07:48:06.053145] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:06.656 [2024-10-07 07:48:06.053390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:06.656 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:06.657 "name": "raid_bdev1", 00:27:06.657 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:06.657 "strip_size_kb": 64, 00:27:06.657 "state": "online", 00:27:06.657 "raid_level": "raid5f", 00:27:06.657 "superblock": false, 00:27:06.657 "num_base_bdevs": 3, 00:27:06.657 "num_base_bdevs_discovered": 3, 00:27:06.657 "num_base_bdevs_operational": 3, 00:27:06.657 "base_bdevs_list": [ 00:27:06.657 { 00:27:06.657 "name": "BaseBdev1", 00:27:06.657 "uuid": "c5e326e4-48f5-5bfc-b762-77765c164ae0", 00:27:06.657 "is_configured": true, 00:27:06.657 "data_offset": 0, 00:27:06.657 "data_size": 65536 00:27:06.657 }, 00:27:06.657 { 00:27:06.657 "name": "BaseBdev2", 00:27:06.657 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:06.657 "is_configured": true, 00:27:06.657 "data_offset": 0, 00:27:06.657 "data_size": 65536 00:27:06.657 }, 00:27:06.657 { 00:27:06.657 "name": "BaseBdev3", 00:27:06.657 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:06.657 "is_configured": true, 00:27:06.657 "data_offset": 0, 00:27:06.657 "data_size": 65536 00:27:06.657 } 00:27:06.657 ] 00:27:06.657 }' 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:06.657 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.226 07:48:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:07.226 [2024-10-07 07:48:06.504493] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:07.226 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:07.486 [2024-10-07 07:48:06.868409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:07.486 /dev/nbd0 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # break 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:07.486 1+0 records in 00:27:07.486 1+0 records out 00:27:07.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316522 s, 
12.9 MB/s 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:27:07.486 07:48:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:27:08.057 512+0 records in 00:27:08.057 512+0 records out 00:27:08.057 67108864 bytes (67 MB, 64 MiB) copied, 0.437764 s, 153 MB/s 00:27:08.057 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:08.057 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:08.057 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:08.057 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:08.057 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:08.057 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:27:08.057 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:08.316 [2024-10-07 07:48:07.639358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.316 [2024-10-07 07:48:07.679161] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:08.316 "name": "raid_bdev1", 00:27:08.316 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:08.316 "strip_size_kb": 64, 00:27:08.316 "state": "online", 00:27:08.316 "raid_level": "raid5f", 00:27:08.316 "superblock": false, 00:27:08.316 "num_base_bdevs": 3, 00:27:08.316 "num_base_bdevs_discovered": 2, 00:27:08.316 "num_base_bdevs_operational": 2, 00:27:08.316 "base_bdevs_list": [ 00:27:08.316 { 00:27:08.316 "name": null, 00:27:08.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.316 "is_configured": false, 00:27:08.316 "data_offset": 0, 00:27:08.316 "data_size": 65536 00:27:08.316 }, 
00:27:08.316 { 00:27:08.316 "name": "BaseBdev2", 00:27:08.316 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:08.316 "is_configured": true, 00:27:08.316 "data_offset": 0, 00:27:08.316 "data_size": 65536 00:27:08.316 }, 00:27:08.316 { 00:27:08.316 "name": "BaseBdev3", 00:27:08.316 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:08.316 "is_configured": true, 00:27:08.316 "data_offset": 0, 00:27:08.316 "data_size": 65536 00:27:08.316 } 00:27:08.316 ] 00:27:08.316 }' 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:08.316 07:48:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.885 07:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:08.885 07:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:08.885 07:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.885 [2024-10-07 07:48:08.147287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:08.885 [2024-10-07 07:48:08.165453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:27:08.885 07:48:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:08.885 07:48:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:08.885 [2024-10-07 07:48:08.175002] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:09.821 "name": "raid_bdev1", 00:27:09.821 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:09.821 "strip_size_kb": 64, 00:27:09.821 "state": "online", 00:27:09.821 "raid_level": "raid5f", 00:27:09.821 "superblock": false, 00:27:09.821 "num_base_bdevs": 3, 00:27:09.821 "num_base_bdevs_discovered": 3, 00:27:09.821 "num_base_bdevs_operational": 3, 00:27:09.821 "process": { 00:27:09.821 "type": "rebuild", 00:27:09.821 "target": "spare", 00:27:09.821 "progress": { 00:27:09.821 "blocks": 18432, 00:27:09.821 "percent": 14 00:27:09.821 } 00:27:09.821 }, 00:27:09.821 "base_bdevs_list": [ 00:27:09.821 { 00:27:09.821 "name": "spare", 00:27:09.821 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:09.821 "is_configured": true, 00:27:09.821 "data_offset": 0, 00:27:09.821 "data_size": 65536 00:27:09.821 }, 00:27:09.821 { 00:27:09.821 "name": "BaseBdev2", 00:27:09.821 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:09.821 "is_configured": true, 00:27:09.821 "data_offset": 0, 00:27:09.821 "data_size": 65536 00:27:09.821 }, 00:27:09.821 { 00:27:09.821 "name": "BaseBdev3", 00:27:09.821 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:09.821 "is_configured": true, 00:27:09.821 
"data_offset": 0, 00:27:09.821 "data_size": 65536 00:27:09.821 } 00:27:09.821 ] 00:27:09.821 }' 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:09.821 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.822 [2024-10-07 07:48:09.320634] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:10.081 [2024-10-07 07:48:09.387145] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:10.081 [2024-10-07 07:48:09.387382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.081 [2024-10-07 07:48:09.387416] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:10.081 [2024-10-07 07:48:09.387429] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:10.081 07:48:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.081 "name": "raid_bdev1", 00:27:10.081 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:10.081 "strip_size_kb": 64, 00:27:10.081 "state": "online", 00:27:10.081 "raid_level": "raid5f", 00:27:10.081 "superblock": false, 00:27:10.081 "num_base_bdevs": 3, 00:27:10.081 "num_base_bdevs_discovered": 2, 00:27:10.081 "num_base_bdevs_operational": 2, 00:27:10.081 "base_bdevs_list": [ 00:27:10.081 { 00:27:10.081 "name": null, 00:27:10.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.081 "is_configured": false, 00:27:10.081 "data_offset": 0, 00:27:10.081 "data_size": 65536 00:27:10.081 }, 00:27:10.081 { 00:27:10.081 
"name": "BaseBdev2", 00:27:10.081 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:10.081 "is_configured": true, 00:27:10.081 "data_offset": 0, 00:27:10.081 "data_size": 65536 00:27:10.081 }, 00:27:10.081 { 00:27:10.081 "name": "BaseBdev3", 00:27:10.081 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:10.081 "is_configured": true, 00:27:10.081 "data_offset": 0, 00:27:10.081 "data_size": 65536 00:27:10.081 } 00:27:10.081 ] 00:27:10.081 }' 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.081 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.341 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:10.341 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:10.341 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:10.341 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:10.341 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:10.600 "name": "raid_bdev1", 00:27:10.600 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:10.600 "strip_size_kb": 64, 00:27:10.600 "state": 
"online", 00:27:10.600 "raid_level": "raid5f", 00:27:10.600 "superblock": false, 00:27:10.600 "num_base_bdevs": 3, 00:27:10.600 "num_base_bdevs_discovered": 2, 00:27:10.600 "num_base_bdevs_operational": 2, 00:27:10.600 "base_bdevs_list": [ 00:27:10.600 { 00:27:10.600 "name": null, 00:27:10.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.600 "is_configured": false, 00:27:10.600 "data_offset": 0, 00:27:10.600 "data_size": 65536 00:27:10.600 }, 00:27:10.600 { 00:27:10.600 "name": "BaseBdev2", 00:27:10.600 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:10.600 "is_configured": true, 00:27:10.600 "data_offset": 0, 00:27:10.600 "data_size": 65536 00:27:10.600 }, 00:27:10.600 { 00:27:10.600 "name": "BaseBdev3", 00:27:10.600 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:10.600 "is_configured": true, 00:27:10.600 "data_offset": 0, 00:27:10.600 "data_size": 65536 00:27:10.600 } 00:27:10.600 ] 00:27:10.600 }' 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:10.600 07:48:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:10.600 07:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:10.600 07:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:10.600 07:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:10.600 07:48:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.600 [2024-10-07 07:48:10.042006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:10.600 [2024-10-07 07:48:10.058425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:27:10.600 07:48:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:10.600 07:48:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:10.600 [2024-10-07 07:48:10.067407] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.539 07:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:11.798 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:11.798 "name": "raid_bdev1", 00:27:11.798 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:11.798 "strip_size_kb": 64, 00:27:11.798 "state": "online", 00:27:11.798 "raid_level": "raid5f", 00:27:11.798 "superblock": false, 00:27:11.798 "num_base_bdevs": 3, 00:27:11.798 "num_base_bdevs_discovered": 3, 00:27:11.798 "num_base_bdevs_operational": 3, 00:27:11.798 "process": { 00:27:11.798 "type": "rebuild", 00:27:11.798 "target": "spare", 00:27:11.798 "progress": { 
00:27:11.798 "blocks": 18432, 00:27:11.798 "percent": 14 00:27:11.798 } 00:27:11.798 }, 00:27:11.798 "base_bdevs_list": [ 00:27:11.798 { 00:27:11.798 "name": "spare", 00:27:11.798 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:11.798 "is_configured": true, 00:27:11.798 "data_offset": 0, 00:27:11.798 "data_size": 65536 00:27:11.798 }, 00:27:11.798 { 00:27:11.798 "name": "BaseBdev2", 00:27:11.798 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:11.798 "is_configured": true, 00:27:11.798 "data_offset": 0, 00:27:11.798 "data_size": 65536 00:27:11.798 }, 00:27:11.798 { 00:27:11.798 "name": "BaseBdev3", 00:27:11.798 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:11.798 "is_configured": true, 00:27:11.798 "data_offset": 0, 00:27:11.798 "data_size": 65536 00:27:11.798 } 00:27:11.798 ] 00:27:11.798 }' 00:27:11.798 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:11.798 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:11.798 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=581 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:11.799 "name": "raid_bdev1", 00:27:11.799 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:11.799 "strip_size_kb": 64, 00:27:11.799 "state": "online", 00:27:11.799 "raid_level": "raid5f", 00:27:11.799 "superblock": false, 00:27:11.799 "num_base_bdevs": 3, 00:27:11.799 "num_base_bdevs_discovered": 3, 00:27:11.799 "num_base_bdevs_operational": 3, 00:27:11.799 "process": { 00:27:11.799 "type": "rebuild", 00:27:11.799 "target": "spare", 00:27:11.799 "progress": { 00:27:11.799 "blocks": 22528, 00:27:11.799 "percent": 17 00:27:11.799 } 00:27:11.799 }, 00:27:11.799 "base_bdevs_list": [ 00:27:11.799 { 00:27:11.799 "name": "spare", 00:27:11.799 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:11.799 "is_configured": true, 00:27:11.799 "data_offset": 0, 00:27:11.799 "data_size": 65536 00:27:11.799 }, 00:27:11.799 { 00:27:11.799 "name": "BaseBdev2", 00:27:11.799 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:11.799 "is_configured": true, 00:27:11.799 
"data_offset": 0, 00:27:11.799 "data_size": 65536 00:27:11.799 }, 00:27:11.799 { 00:27:11.799 "name": "BaseBdev3", 00:27:11.799 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:11.799 "is_configured": true, 00:27:11.799 "data_offset": 0, 00:27:11.799 "data_size": 65536 00:27:11.799 } 00:27:11.799 ] 00:27:11.799 }' 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.799 07:48:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.179 07:48:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:13.179 "name": "raid_bdev1", 00:27:13.179 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:13.179 "strip_size_kb": 64, 00:27:13.179 "state": "online", 00:27:13.179 "raid_level": "raid5f", 00:27:13.179 "superblock": false, 00:27:13.179 "num_base_bdevs": 3, 00:27:13.179 "num_base_bdevs_discovered": 3, 00:27:13.179 "num_base_bdevs_operational": 3, 00:27:13.179 "process": { 00:27:13.179 "type": "rebuild", 00:27:13.179 "target": "spare", 00:27:13.179 "progress": { 00:27:13.179 "blocks": 45056, 00:27:13.179 "percent": 34 00:27:13.179 } 00:27:13.179 }, 00:27:13.179 "base_bdevs_list": [ 00:27:13.179 { 00:27:13.179 "name": "spare", 00:27:13.179 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:13.179 "is_configured": true, 00:27:13.179 "data_offset": 0, 00:27:13.179 "data_size": 65536 00:27:13.179 }, 00:27:13.179 { 00:27:13.179 "name": "BaseBdev2", 00:27:13.179 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:13.179 "is_configured": true, 00:27:13.179 "data_offset": 0, 00:27:13.179 "data_size": 65536 00:27:13.179 }, 00:27:13.179 { 00:27:13.179 "name": "BaseBdev3", 00:27:13.179 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:13.179 "is_configured": true, 00:27:13.179 "data_offset": 0, 00:27:13.179 "data_size": 65536 00:27:13.179 } 00:27:13.179 ] 00:27:13.179 }' 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:13.179 07:48:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:14.118 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:14.118 "name": "raid_bdev1", 00:27:14.118 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:14.118 "strip_size_kb": 64, 00:27:14.118 "state": "online", 00:27:14.118 "raid_level": "raid5f", 00:27:14.118 "superblock": false, 00:27:14.118 "num_base_bdevs": 3, 00:27:14.118 "num_base_bdevs_discovered": 3, 00:27:14.118 "num_base_bdevs_operational": 3, 00:27:14.118 "process": { 00:27:14.118 "type": "rebuild", 00:27:14.118 "target": "spare", 00:27:14.118 "progress": { 00:27:14.118 "blocks": 69632, 00:27:14.118 "percent": 53 00:27:14.118 } 00:27:14.118 }, 00:27:14.118 "base_bdevs_list": [ 00:27:14.118 { 00:27:14.118 "name": "spare", 00:27:14.118 
"uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:14.118 "is_configured": true, 00:27:14.118 "data_offset": 0, 00:27:14.118 "data_size": 65536 00:27:14.118 }, 00:27:14.118 { 00:27:14.118 "name": "BaseBdev2", 00:27:14.118 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:14.118 "is_configured": true, 00:27:14.118 "data_offset": 0, 00:27:14.118 "data_size": 65536 00:27:14.119 }, 00:27:14.119 { 00:27:14.119 "name": "BaseBdev3", 00:27:14.119 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:14.119 "is_configured": true, 00:27:14.119 "data_offset": 0, 00:27:14.119 "data_size": 65536 00:27:14.119 } 00:27:14.119 ] 00:27:14.119 }' 00:27:14.119 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:14.119 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:14.119 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:14.119 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:14.119 07:48:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.500 07:48:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:15.500 "name": "raid_bdev1", 00:27:15.500 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:15.500 "strip_size_kb": 64, 00:27:15.500 "state": "online", 00:27:15.500 "raid_level": "raid5f", 00:27:15.500 "superblock": false, 00:27:15.500 "num_base_bdevs": 3, 00:27:15.500 "num_base_bdevs_discovered": 3, 00:27:15.500 "num_base_bdevs_operational": 3, 00:27:15.500 "process": { 00:27:15.500 "type": "rebuild", 00:27:15.500 "target": "spare", 00:27:15.500 "progress": { 00:27:15.500 "blocks": 92160, 00:27:15.500 "percent": 70 00:27:15.500 } 00:27:15.500 }, 00:27:15.500 "base_bdevs_list": [ 00:27:15.500 { 00:27:15.500 "name": "spare", 00:27:15.500 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:15.500 "is_configured": true, 00:27:15.500 "data_offset": 0, 00:27:15.500 "data_size": 65536 00:27:15.500 }, 00:27:15.500 { 00:27:15.500 "name": "BaseBdev2", 00:27:15.500 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:15.500 "is_configured": true, 00:27:15.500 "data_offset": 0, 00:27:15.500 "data_size": 65536 00:27:15.500 }, 00:27:15.500 { 00:27:15.500 "name": "BaseBdev3", 00:27:15.500 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:15.500 "is_configured": true, 00:27:15.500 "data_offset": 0, 00:27:15.500 "data_size": 65536 00:27:15.500 } 00:27:15.500 ] 00:27:15.500 }' 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:15.500 07:48:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:16.440 "name": "raid_bdev1", 00:27:16.440 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:16.440 "strip_size_kb": 64, 00:27:16.440 "state": "online", 00:27:16.440 "raid_level": "raid5f", 00:27:16.440 "superblock": false, 00:27:16.440 "num_base_bdevs": 3, 00:27:16.440 "num_base_bdevs_discovered": 3, 00:27:16.440 
"num_base_bdevs_operational": 3, 00:27:16.440 "process": { 00:27:16.440 "type": "rebuild", 00:27:16.440 "target": "spare", 00:27:16.440 "progress": { 00:27:16.440 "blocks": 114688, 00:27:16.440 "percent": 87 00:27:16.440 } 00:27:16.440 }, 00:27:16.440 "base_bdevs_list": [ 00:27:16.440 { 00:27:16.440 "name": "spare", 00:27:16.440 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:16.440 "is_configured": true, 00:27:16.440 "data_offset": 0, 00:27:16.440 "data_size": 65536 00:27:16.440 }, 00:27:16.440 { 00:27:16.440 "name": "BaseBdev2", 00:27:16.440 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:16.440 "is_configured": true, 00:27:16.440 "data_offset": 0, 00:27:16.440 "data_size": 65536 00:27:16.440 }, 00:27:16.440 { 00:27:16.440 "name": "BaseBdev3", 00:27:16.440 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:16.440 "is_configured": true, 00:27:16.440 "data_offset": 0, 00:27:16.440 "data_size": 65536 00:27:16.440 } 00:27:16.440 ] 00:27:16.440 }' 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:16.440 07:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:17.009 [2024-10-07 07:48:16.532865] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:17.009 [2024-10-07 07:48:16.532979] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:17.009 [2024-10-07 07:48:16.533037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:17.579 "name": "raid_bdev1", 00:27:17.579 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:17.579 "strip_size_kb": 64, 00:27:17.579 "state": "online", 00:27:17.579 "raid_level": "raid5f", 00:27:17.579 "superblock": false, 00:27:17.579 "num_base_bdevs": 3, 00:27:17.579 "num_base_bdevs_discovered": 3, 00:27:17.579 "num_base_bdevs_operational": 3, 00:27:17.579 "base_bdevs_list": [ 00:27:17.579 { 00:27:17.579 "name": "spare", 00:27:17.579 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:17.579 "is_configured": true, 00:27:17.579 "data_offset": 0, 00:27:17.579 "data_size": 65536 00:27:17.579 }, 00:27:17.579 { 00:27:17.579 "name": "BaseBdev2", 00:27:17.579 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:17.579 "is_configured": true, 00:27:17.579 
"data_offset": 0, 00:27:17.579 "data_size": 65536 00:27:17.579 }, 00:27:17.579 { 00:27:17.579 "name": "BaseBdev3", 00:27:17.579 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:17.579 "is_configured": true, 00:27:17.579 "data_offset": 0, 00:27:17.579 "data_size": 65536 00:27:17.579 } 00:27:17.579 ] 00:27:17.579 }' 00:27:17.579 07:48:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:17.579 07:48:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:17.579 "name": "raid_bdev1", 00:27:17.579 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:17.579 "strip_size_kb": 64, 00:27:17.579 "state": "online", 00:27:17.579 "raid_level": "raid5f", 00:27:17.579 "superblock": false, 00:27:17.579 "num_base_bdevs": 3, 00:27:17.579 "num_base_bdevs_discovered": 3, 00:27:17.579 "num_base_bdevs_operational": 3, 00:27:17.579 "base_bdevs_list": [ 00:27:17.579 { 00:27:17.579 "name": "spare", 00:27:17.579 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:17.579 "is_configured": true, 00:27:17.579 "data_offset": 0, 00:27:17.579 "data_size": 65536 00:27:17.579 }, 00:27:17.579 { 00:27:17.579 "name": "BaseBdev2", 00:27:17.579 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:17.579 "is_configured": true, 00:27:17.579 "data_offset": 0, 00:27:17.579 "data_size": 65536 00:27:17.579 }, 00:27:17.579 { 00:27:17.579 "name": "BaseBdev3", 00:27:17.579 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:17.579 "is_configured": true, 00:27:17.579 "data_offset": 0, 00:27:17.579 "data_size": 65536 00:27:17.579 } 00:27:17.579 ] 00:27:17.579 }' 00:27:17.579 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:17.839 07:48:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.839 "name": "raid_bdev1", 00:27:17.839 "uuid": "b808fa78-bb2b-44fe-9c64-6e14b922d554", 00:27:17.839 "strip_size_kb": 64, 00:27:17.839 "state": "online", 00:27:17.839 "raid_level": "raid5f", 00:27:17.839 "superblock": false, 00:27:17.839 "num_base_bdevs": 3, 00:27:17.839 "num_base_bdevs_discovered": 3, 00:27:17.839 "num_base_bdevs_operational": 3, 00:27:17.839 "base_bdevs_list": [ 00:27:17.839 { 00:27:17.839 "name": "spare", 00:27:17.839 "uuid": "d4f5daab-525b-5005-9604-f0117c07976a", 00:27:17.839 "is_configured": true, 00:27:17.839 "data_offset": 0, 00:27:17.839 "data_size": 65536 00:27:17.839 }, 00:27:17.839 { 00:27:17.839 
"name": "BaseBdev2", 00:27:17.839 "uuid": "9f2be307-bcdd-5069-a4f9-3f7f2437dd2d", 00:27:17.839 "is_configured": true, 00:27:17.839 "data_offset": 0, 00:27:17.839 "data_size": 65536 00:27:17.839 }, 00:27:17.839 { 00:27:17.839 "name": "BaseBdev3", 00:27:17.839 "uuid": "b65fa88a-9c86-556f-915f-58ee696c0c4a", 00:27:17.839 "is_configured": true, 00:27:17.839 "data_offset": 0, 00:27:17.839 "data_size": 65536 00:27:17.839 } 00:27:17.839 ] 00:27:17.839 }' 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.839 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.406 [2024-10-07 07:48:17.738635] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.406 [2024-10-07 07:48:17.738669] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:18.406 [2024-10-07 07:48:17.738770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.406 [2024-10-07 07:48:17.738857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.406 [2024-10-07 07:48:17.738877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@564 -- # xtrace_disable 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.406 07:48:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:18.665 /dev/nbd0 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # break 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:18.665 1+0 records in 00:27:18.665 1+0 records out 00:27:18.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300295 s, 13.6 MB/s 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.665 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:18.925 /dev/nbd1 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # break 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:18.925 1+0 records in 00:27:18.925 1+0 records out 00:27:18.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359218 s, 11.4 MB/s 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:27:18.925 07:48:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.925 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:19.183 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:19.183 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:19.183 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:19.183 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:19.183 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:19.183 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.183 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.443 07:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81822 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' -z 81822 ']' 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # kill -0 81822 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # uname 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 81822 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:27:19.702 killing process with pid 81822 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 81822' 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # kill 81822 00:27:19.702 Received shutdown signal, test time was about 60.000000 seconds 00:27:19.702 00:27:19.702 Latency(us) 00:27:19.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.702 =================================================================================================================== 00:27:19.702 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:19.702 [2024-10-07 07:48:19.189256] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.702 07:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@977 -- # wait 81822 00:27:20.270 [2024-10-07 07:48:19.605965] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:27:21.647 00:27:21.647 real 0m16.125s 00:27:21.647 user 0m19.957s 00:27:21.647 sys 0m2.260s 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:27:21.647 ************************************ 00:27:21.647 END TEST raid5f_rebuild_test 00:27:21.647 ************************************ 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.647 07:48:20 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:27:21.647 07:48:20 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:27:21.647 07:48:20 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:27:21.647 07:48:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:21.647 
************************************ 00:27:21.647 START TEST raid5f_rebuild_test_sb 00:27:21.647 ************************************ 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid5f 3 true false true 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82269 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82269 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # '[' -z 82269 ']' 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@841 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:21.647 07:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.647 [2024-10-07 07:48:21.095985] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:27:21.647 [2024-10-07 07:48:21.096167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82269 ] 00:27:21.647 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:21.647 Zero copy mechanism will not be used. 
00:27:21.906 [2024-10-07 07:48:21.282582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.228 [2024-10-07 07:48:21.499583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.228 [2024-10-07 07:48:21.720293] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.228 [2024-10-07 07:48:21.720361] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # return 0 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.798 BaseBdev1_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.798 [2024-10-07 07:48:22.134990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:22.798 [2024-10-07 07:48:22.135064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.798 [2024-10-07 07:48:22.135120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:22.798 
[2024-10-07 07:48:22.135138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.798 [2024-10-07 07:48:22.137609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.798 [2024-10-07 07:48:22.137656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:22.798 BaseBdev1 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.798 BaseBdev2_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.798 [2024-10-07 07:48:22.207157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:22.798 [2024-10-07 07:48:22.207236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.798 [2024-10-07 07:48:22.207263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:22.798 [2024-10-07 07:48:22.207281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.798 [2024-10-07 07:48:22.210041] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.798 [2024-10-07 07:48:22.210090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:22.798 BaseBdev2 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.798 BaseBdev3_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.798 [2024-10-07 07:48:22.260543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:22.798 [2024-10-07 07:48:22.260607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.798 [2024-10-07 07:48:22.260636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:22.798 [2024-10-07 07:48:22.260652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.798 [2024-10-07 07:48:22.263149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.798 [2024-10-07 07:48:22.263192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:27:22.798 BaseBdev3 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.798 spare_malloc 00:27:22.798 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.799 spare_delay 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.799 [2024-10-07 07:48:22.326999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:22.799 [2024-10-07 07:48:22.327078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.799 [2024-10-07 07:48:22.327105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:22.799 [2024-10-07 07:48:22.327121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.799 [2024-10-07 07:48:22.329806] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.799 [2024-10-07 07:48:22.329870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:22.799 spare 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:22.799 [2024-10-07 07:48:22.339050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.799 [2024-10-07 07:48:22.341358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:22.799 [2024-10-07 07:48:22.341433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:22.799 [2024-10-07 07:48:22.341635] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:22.799 [2024-10-07 07:48:22.341647] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:22.799 [2024-10-07 07:48:22.341975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:22.799 [2024-10-07 07:48:22.348121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:22.799 [2024-10-07 07:48:22.348150] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:22.799 [2024-10-07 07:48:22.348344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:22.799 07:48:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:22.799 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.059 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:23.059 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.059 "name": "raid_bdev1", 00:27:23.059 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:23.059 "strip_size_kb": 64, 00:27:23.059 "state": "online", 00:27:23.059 "raid_level": "raid5f", 00:27:23.059 "superblock": true, 
00:27:23.059 "num_base_bdevs": 3, 00:27:23.059 "num_base_bdevs_discovered": 3, 00:27:23.059 "num_base_bdevs_operational": 3, 00:27:23.059 "base_bdevs_list": [ 00:27:23.059 { 00:27:23.059 "name": "BaseBdev1", 00:27:23.059 "uuid": "b82baf4e-4923-5673-a63d-d5c10bf1c1f2", 00:27:23.059 "is_configured": true, 00:27:23.059 "data_offset": 2048, 00:27:23.059 "data_size": 63488 00:27:23.059 }, 00:27:23.059 { 00:27:23.059 "name": "BaseBdev2", 00:27:23.059 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:23.059 "is_configured": true, 00:27:23.059 "data_offset": 2048, 00:27:23.059 "data_size": 63488 00:27:23.059 }, 00:27:23.059 { 00:27:23.059 "name": "BaseBdev3", 00:27:23.059 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:23.059 "is_configured": true, 00:27:23.059 "data_offset": 2048, 00:27:23.059 "data_size": 63488 00:27:23.059 } 00:27:23.059 ] 00:27:23.059 }' 00:27:23.059 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.059 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:23.318 [2024-10-07 07:48:22.775314] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.318 07:48:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:23.318 07:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:27:23.886 [2024-10-07 07:48:23.151270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:23.886 /dev/nbd0 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:27:23.886 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:23.886 1+0 records in 00:27:23.886 1+0 records out 00:27:23.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253649 s, 16.1 MB/s 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:27:23.887 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:27:24.145 496+0 records in 00:27:24.145 496+0 records out 00:27:24.145 65011712 bytes (65 MB, 62 MiB) copied, 0.382454 s, 170 MB/s 00:27:24.145 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:24.145 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:24.145 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:24.145 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:24.145 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:24.145 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:24.145 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:24.404 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:24.405 [2024-10-07 07:48:23.912153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.405 [2024-10-07 07:48:23.923874] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:24.405 07:48:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.405 "name": "raid_bdev1", 00:27:24.405 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:24.405 "strip_size_kb": 64, 00:27:24.405 "state": "online", 00:27:24.405 "raid_level": "raid5f", 00:27:24.405 "superblock": true, 00:27:24.405 "num_base_bdevs": 3, 00:27:24.405 "num_base_bdevs_discovered": 2, 00:27:24.405 "num_base_bdevs_operational": 2, 00:27:24.405 "base_bdevs_list": [ 00:27:24.405 { 00:27:24.405 "name": null, 00:27:24.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.405 "is_configured": false, 00:27:24.405 "data_offset": 0, 00:27:24.405 "data_size": 63488 00:27:24.405 }, 00:27:24.405 { 00:27:24.405 "name": "BaseBdev2", 00:27:24.405 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:24.405 "is_configured": true, 00:27:24.405 "data_offset": 2048, 00:27:24.405 "data_size": 63488 00:27:24.405 }, 00:27:24.405 { 00:27:24.405 "name": "BaseBdev3", 00:27:24.405 "uuid": 
"ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:24.405 "is_configured": true, 00:27:24.405 "data_offset": 2048, 00:27:24.405 "data_size": 63488 00:27:24.405 } 00:27:24.405 ] 00:27:24.405 }' 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.405 07:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.973 07:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.973 07:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:24.973 07:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.973 [2024-10-07 07:48:24.368018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.973 [2024-10-07 07:48:24.386320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:27:24.973 07:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:24.973 07:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:24.973 [2024-10-07 07:48:24.395443] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:25.910 "name": "raid_bdev1", 00:27:25.910 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:25.910 "strip_size_kb": 64, 00:27:25.910 "state": "online", 00:27:25.910 "raid_level": "raid5f", 00:27:25.910 "superblock": true, 00:27:25.910 "num_base_bdevs": 3, 00:27:25.910 "num_base_bdevs_discovered": 3, 00:27:25.910 "num_base_bdevs_operational": 3, 00:27:25.910 "process": { 00:27:25.910 "type": "rebuild", 00:27:25.910 "target": "spare", 00:27:25.910 "progress": { 00:27:25.910 "blocks": 18432, 00:27:25.910 "percent": 14 00:27:25.910 } 00:27:25.910 }, 00:27:25.910 "base_bdevs_list": [ 00:27:25.910 { 00:27:25.910 "name": "spare", 00:27:25.910 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:25.910 "is_configured": true, 00:27:25.910 "data_offset": 2048, 00:27:25.910 "data_size": 63488 00:27:25.910 }, 00:27:25.910 { 00:27:25.910 "name": "BaseBdev2", 00:27:25.910 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:25.910 "is_configured": true, 00:27:25.910 "data_offset": 2048, 00:27:25.910 "data_size": 63488 00:27:25.910 }, 00:27:25.910 { 00:27:25.910 "name": "BaseBdev3", 00:27:25.910 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:25.910 "is_configured": true, 00:27:25.910 "data_offset": 2048, 00:27:25.910 "data_size": 63488 00:27:25.910 } 00:27:25.910 ] 00:27:25.910 }' 00:27:25.910 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:26.169 07:48:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.169 [2024-10-07 07:48:25.536858] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.169 [2024-10-07 07:48:25.608300] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:26.169 [2024-10-07 07:48:25.608387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.169 [2024-10-07 07:48:25.608409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.169 [2024-10-07 07:48:25.608419] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.169 07:48:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.169 "name": "raid_bdev1", 00:27:26.169 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:26.169 "strip_size_kb": 64, 00:27:26.169 "state": "online", 00:27:26.169 "raid_level": "raid5f", 00:27:26.169 "superblock": true, 00:27:26.169 "num_base_bdevs": 3, 00:27:26.169 "num_base_bdevs_discovered": 2, 00:27:26.169 "num_base_bdevs_operational": 2, 00:27:26.169 "base_bdevs_list": [ 00:27:26.169 { 00:27:26.169 "name": null, 00:27:26.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.169 "is_configured": false, 00:27:26.169 "data_offset": 0, 00:27:26.169 "data_size": 63488 00:27:26.169 }, 00:27:26.169 { 00:27:26.169 "name": "BaseBdev2", 00:27:26.169 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:26.169 "is_configured": true, 00:27:26.169 "data_offset": 2048, 00:27:26.169 "data_size": 
63488 00:27:26.169 }, 00:27:26.169 { 00:27:26.169 "name": "BaseBdev3", 00:27:26.169 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:26.169 "is_configured": true, 00:27:26.169 "data_offset": 2048, 00:27:26.169 "data_size": 63488 00:27:26.169 } 00:27:26.169 ] 00:27:26.169 }' 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.169 07:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:26.737 "name": "raid_bdev1", 00:27:26.737 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:26.737 "strip_size_kb": 64, 00:27:26.737 "state": "online", 00:27:26.737 "raid_level": "raid5f", 00:27:26.737 "superblock": true, 00:27:26.737 "num_base_bdevs": 3, 00:27:26.737 
"num_base_bdevs_discovered": 2, 00:27:26.737 "num_base_bdevs_operational": 2, 00:27:26.737 "base_bdevs_list": [ 00:27:26.737 { 00:27:26.737 "name": null, 00:27:26.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.737 "is_configured": false, 00:27:26.737 "data_offset": 0, 00:27:26.737 "data_size": 63488 00:27:26.737 }, 00:27:26.737 { 00:27:26.737 "name": "BaseBdev2", 00:27:26.737 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:26.737 "is_configured": true, 00:27:26.737 "data_offset": 2048, 00:27:26.737 "data_size": 63488 00:27:26.737 }, 00:27:26.737 { 00:27:26.737 "name": "BaseBdev3", 00:27:26.737 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:26.737 "is_configured": true, 00:27:26.737 "data_offset": 2048, 00:27:26.737 "data_size": 63488 00:27:26.737 } 00:27:26.737 ] 00:27:26.737 }' 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.737 [2024-10-07 07:48:26.219241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:26.737 [2024-10-07 07:48:26.235570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:27:26.737 07:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:26.737 07:48:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:26.737 [2024-10-07 07:48:26.244238] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:28.116 "name": "raid_bdev1", 00:27:28.116 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:28.116 "strip_size_kb": 64, 00:27:28.116 "state": "online", 00:27:28.116 "raid_level": "raid5f", 00:27:28.116 "superblock": true, 00:27:28.116 "num_base_bdevs": 3, 00:27:28.116 "num_base_bdevs_discovered": 3, 00:27:28.116 "num_base_bdevs_operational": 3, 00:27:28.116 "process": { 00:27:28.116 "type": "rebuild", 00:27:28.116 "target": "spare", 00:27:28.116 "progress": { 00:27:28.116 "blocks": 20480, 00:27:28.116 "percent": 16 00:27:28.116 } 
00:27:28.116 }, 00:27:28.116 "base_bdevs_list": [ 00:27:28.116 { 00:27:28.116 "name": "spare", 00:27:28.116 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:28.116 "is_configured": true, 00:27:28.116 "data_offset": 2048, 00:27:28.116 "data_size": 63488 00:27:28.116 }, 00:27:28.116 { 00:27:28.116 "name": "BaseBdev2", 00:27:28.116 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:28.116 "is_configured": true, 00:27:28.116 "data_offset": 2048, 00:27:28.116 "data_size": 63488 00:27:28.116 }, 00:27:28.116 { 00:27:28.116 "name": "BaseBdev3", 00:27:28.116 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:28.116 "is_configured": true, 00:27:28.116 "data_offset": 2048, 00:27:28.116 "data_size": 63488 00:27:28.116 } 00:27:28.116 ] 00:27:28.116 }' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:28.116 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=597 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:28.116 07:48:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:28.116 "name": "raid_bdev1", 00:27:28.116 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:28.116 "strip_size_kb": 64, 00:27:28.116 "state": "online", 00:27:28.116 "raid_level": "raid5f", 00:27:28.116 "superblock": true, 00:27:28.116 "num_base_bdevs": 3, 00:27:28.116 "num_base_bdevs_discovered": 3, 00:27:28.116 "num_base_bdevs_operational": 3, 00:27:28.116 "process": { 00:27:28.116 "type": "rebuild", 00:27:28.116 "target": "spare", 00:27:28.116 "progress": { 00:27:28.116 "blocks": 22528, 00:27:28.116 "percent": 17 00:27:28.116 } 00:27:28.116 }, 00:27:28.116 "base_bdevs_list": [ 00:27:28.116 { 00:27:28.116 "name": "spare", 00:27:28.116 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:28.116 "is_configured": true, 00:27:28.116 "data_offset": 2048, 00:27:28.116 
"data_size": 63488 00:27:28.116 }, 00:27:28.116 { 00:27:28.116 "name": "BaseBdev2", 00:27:28.116 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:28.116 "is_configured": true, 00:27:28.116 "data_offset": 2048, 00:27:28.116 "data_size": 63488 00:27:28.116 }, 00:27:28.116 { 00:27:28.116 "name": "BaseBdev3", 00:27:28.116 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:28.116 "is_configured": true, 00:27:28.116 "data_offset": 2048, 00:27:28.116 "data_size": 63488 00:27:28.116 } 00:27:28.116 ] 00:27:28.116 }' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.116 07:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:29.107 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:29.108 "name": "raid_bdev1", 00:27:29.108 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:29.108 "strip_size_kb": 64, 00:27:29.108 "state": "online", 00:27:29.108 "raid_level": "raid5f", 00:27:29.108 "superblock": true, 00:27:29.108 "num_base_bdevs": 3, 00:27:29.108 "num_base_bdevs_discovered": 3, 00:27:29.108 "num_base_bdevs_operational": 3, 00:27:29.108 "process": { 00:27:29.108 "type": "rebuild", 00:27:29.108 "target": "spare", 00:27:29.108 "progress": { 00:27:29.108 "blocks": 45056, 00:27:29.108 "percent": 35 00:27:29.108 } 00:27:29.108 }, 00:27:29.108 "base_bdevs_list": [ 00:27:29.108 { 00:27:29.108 "name": "spare", 00:27:29.108 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:29.108 "is_configured": true, 00:27:29.108 "data_offset": 2048, 00:27:29.108 "data_size": 63488 00:27:29.108 }, 00:27:29.108 { 00:27:29.108 "name": "BaseBdev2", 00:27:29.108 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:29.108 "is_configured": true, 00:27:29.108 "data_offset": 2048, 00:27:29.108 "data_size": 63488 00:27:29.108 }, 00:27:29.108 { 00:27:29.108 "name": "BaseBdev3", 00:27:29.108 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:29.108 "is_configured": true, 00:27:29.108 "data_offset": 2048, 00:27:29.108 "data_size": 63488 00:27:29.108 } 00:27:29.108 ] 00:27:29.108 }' 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.108 
07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.108 07:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:30.485 "name": "raid_bdev1", 00:27:30.485 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:30.485 "strip_size_kb": 64, 00:27:30.485 "state": "online", 00:27:30.485 "raid_level": "raid5f", 00:27:30.485 "superblock": true, 00:27:30.485 "num_base_bdevs": 3, 00:27:30.485 "num_base_bdevs_discovered": 3, 00:27:30.485 
"num_base_bdevs_operational": 3, 00:27:30.485 "process": { 00:27:30.485 "type": "rebuild", 00:27:30.485 "target": "spare", 00:27:30.485 "progress": { 00:27:30.485 "blocks": 67584, 00:27:30.485 "percent": 53 00:27:30.485 } 00:27:30.485 }, 00:27:30.485 "base_bdevs_list": [ 00:27:30.485 { 00:27:30.485 "name": "spare", 00:27:30.485 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:30.485 "is_configured": true, 00:27:30.485 "data_offset": 2048, 00:27:30.485 "data_size": 63488 00:27:30.485 }, 00:27:30.485 { 00:27:30.485 "name": "BaseBdev2", 00:27:30.485 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:30.485 "is_configured": true, 00:27:30.485 "data_offset": 2048, 00:27:30.485 "data_size": 63488 00:27:30.485 }, 00:27:30.485 { 00:27:30.485 "name": "BaseBdev3", 00:27:30.485 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:30.485 "is_configured": true, 00:27:30.485 "data_offset": 2048, 00:27:30.485 "data_size": 63488 00:27:30.485 } 00:27:30.485 ] 00:27:30.485 }' 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:30.485 07:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:31.421 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:31.421 "name": "raid_bdev1", 00:27:31.421 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:31.421 "strip_size_kb": 64, 00:27:31.421 "state": "online", 00:27:31.421 "raid_level": "raid5f", 00:27:31.421 "superblock": true, 00:27:31.421 "num_base_bdevs": 3, 00:27:31.421 "num_base_bdevs_discovered": 3, 00:27:31.421 "num_base_bdevs_operational": 3, 00:27:31.421 "process": { 00:27:31.421 "type": "rebuild", 00:27:31.421 "target": "spare", 00:27:31.421 "progress": { 00:27:31.421 "blocks": 92160, 00:27:31.421 "percent": 72 00:27:31.421 } 00:27:31.421 }, 00:27:31.421 "base_bdevs_list": [ 00:27:31.421 { 00:27:31.421 "name": "spare", 00:27:31.421 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:31.421 "is_configured": true, 00:27:31.421 "data_offset": 2048, 00:27:31.421 "data_size": 63488 00:27:31.421 }, 00:27:31.421 { 00:27:31.421 "name": "BaseBdev2", 00:27:31.422 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:31.422 "is_configured": true, 00:27:31.422 "data_offset": 2048, 00:27:31.422 "data_size": 63488 00:27:31.422 }, 00:27:31.422 { 00:27:31.422 "name": "BaseBdev3", 
00:27:31.422 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:31.422 "is_configured": true, 00:27:31.422 "data_offset": 2048, 00:27:31.422 "data_size": 63488 00:27:31.422 } 00:27:31.422 ] 00:27:31.422 }' 00:27:31.422 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:31.422 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.422 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:31.422 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.422 07:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:32.803 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:32.803 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:32.804 "name": "raid_bdev1", 00:27:32.804 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:32.804 "strip_size_kb": 64, 00:27:32.804 "state": "online", 00:27:32.804 "raid_level": "raid5f", 00:27:32.804 "superblock": true, 00:27:32.804 "num_base_bdevs": 3, 00:27:32.804 "num_base_bdevs_discovered": 3, 00:27:32.804 "num_base_bdevs_operational": 3, 00:27:32.804 "process": { 00:27:32.804 "type": "rebuild", 00:27:32.804 "target": "spare", 00:27:32.804 "progress": { 00:27:32.804 "blocks": 114688, 00:27:32.804 "percent": 90 00:27:32.804 } 00:27:32.804 }, 00:27:32.804 "base_bdevs_list": [ 00:27:32.804 { 00:27:32.804 "name": "spare", 00:27:32.804 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:32.804 "is_configured": true, 00:27:32.804 "data_offset": 2048, 00:27:32.804 "data_size": 63488 00:27:32.804 }, 00:27:32.804 { 00:27:32.804 "name": "BaseBdev2", 00:27:32.804 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:32.804 "is_configured": true, 00:27:32.804 "data_offset": 2048, 00:27:32.804 "data_size": 63488 00:27:32.804 }, 00:27:32.804 { 00:27:32.804 "name": "BaseBdev3", 00:27:32.804 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:32.804 "is_configured": true, 00:27:32.804 "data_offset": 2048, 00:27:32.804 "data_size": 63488 00:27:32.804 } 00:27:32.804 ] 00:27:32.804 }' 00:27:32.804 07:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:32.804 07:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:32.804 07:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:32.804 07:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:32.804 07:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:33.063 [2024-10-07 
07:48:32.510757] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:33.063 [2024-10-07 07:48:32.510875] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:33.063 [2024-10-07 07:48:32.511019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:33.632 "name": "raid_bdev1", 00:27:33.632 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:33.632 "strip_size_kb": 64, 00:27:33.632 "state": "online", 00:27:33.632 "raid_level": "raid5f", 00:27:33.632 "superblock": true, 00:27:33.632 "num_base_bdevs": 3, 00:27:33.632 
"num_base_bdevs_discovered": 3, 00:27:33.632 "num_base_bdevs_operational": 3, 00:27:33.632 "base_bdevs_list": [ 00:27:33.632 { 00:27:33.632 "name": "spare", 00:27:33.632 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:33.632 "is_configured": true, 00:27:33.632 "data_offset": 2048, 00:27:33.632 "data_size": 63488 00:27:33.632 }, 00:27:33.632 { 00:27:33.632 "name": "BaseBdev2", 00:27:33.632 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:33.632 "is_configured": true, 00:27:33.632 "data_offset": 2048, 00:27:33.632 "data_size": 63488 00:27:33.632 }, 00:27:33.632 { 00:27:33.632 "name": "BaseBdev3", 00:27:33.632 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:33.632 "is_configured": true, 00:27:33.632 "data_offset": 2048, 00:27:33.632 "data_size": 63488 00:27:33.632 } 00:27:33.632 ] 00:27:33.632 }' 00:27:33.632 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:33.892 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:33.892 "name": "raid_bdev1", 00:27:33.892 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:33.892 "strip_size_kb": 64, 00:27:33.892 "state": "online", 00:27:33.892 "raid_level": "raid5f", 00:27:33.892 "superblock": true, 00:27:33.892 "num_base_bdevs": 3, 00:27:33.892 "num_base_bdevs_discovered": 3, 00:27:33.892 "num_base_bdevs_operational": 3, 00:27:33.892 "base_bdevs_list": [ 00:27:33.892 { 00:27:33.892 "name": "spare", 00:27:33.892 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:33.892 "is_configured": true, 00:27:33.892 "data_offset": 2048, 00:27:33.892 "data_size": 63488 00:27:33.892 }, 00:27:33.892 { 00:27:33.892 "name": "BaseBdev2", 00:27:33.892 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:33.893 "is_configured": true, 00:27:33.893 "data_offset": 2048, 00:27:33.893 "data_size": 63488 00:27:33.893 }, 00:27:33.893 { 00:27:33.893 "name": "BaseBdev3", 00:27:33.893 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:33.893 "is_configured": true, 00:27:33.893 "data_offset": 2048, 00:27:33.893 "data_size": 63488 00:27:33.893 } 00:27:33.893 ] 00:27:33.893 }' 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:33.893 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:34.153 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:34.153 "name": "raid_bdev1", 
00:27:34.153 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:34.153 "strip_size_kb": 64, 00:27:34.153 "state": "online", 00:27:34.153 "raid_level": "raid5f", 00:27:34.153 "superblock": true, 00:27:34.153 "num_base_bdevs": 3, 00:27:34.153 "num_base_bdevs_discovered": 3, 00:27:34.153 "num_base_bdevs_operational": 3, 00:27:34.153 "base_bdevs_list": [ 00:27:34.153 { 00:27:34.153 "name": "spare", 00:27:34.153 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:34.153 "is_configured": true, 00:27:34.153 "data_offset": 2048, 00:27:34.153 "data_size": 63488 00:27:34.153 }, 00:27:34.153 { 00:27:34.153 "name": "BaseBdev2", 00:27:34.153 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:34.153 "is_configured": true, 00:27:34.153 "data_offset": 2048, 00:27:34.153 "data_size": 63488 00:27:34.153 }, 00:27:34.153 { 00:27:34.153 "name": "BaseBdev3", 00:27:34.153 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:34.153 "is_configured": true, 00:27:34.153 "data_offset": 2048, 00:27:34.153 "data_size": 63488 00:27:34.153 } 00:27:34.153 ] 00:27:34.153 }' 00:27:34.153 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:34.153 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:34.411 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:34.412 [2024-10-07 07:48:33.901202] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:34.412 [2024-10-07 07:48:33.901242] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:34.412 [2024-10-07 07:48:33.901338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.412 [2024-10-07 07:48:33.901428] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.412 [2024-10-07 07:48:33.901449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:34.412 07:48:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:34.670 /dev/nbd0 00:27:34.670 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:34.670 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:34.670 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:27:34.670 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:27:34.670 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:27:34.670 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:27:34.670 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:34.929 1+0 records in 00:27:34.929 1+0 records out 00:27:34.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318025 s, 12.9 MB/s 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:34.929 07:48:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:34.929 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:35.191 /dev/nbd1 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:35.191 1+0 records in 00:27:35.191 1+0 records out 00:27:35.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455194 s, 9.0 MB/s 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:35.191 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:35.455 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:35.455 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:35.455 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:35.455 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:35.455 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:35.455 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:35.455 07:48:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:35.715 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:35.976 [2024-10-07 07:48:35.380699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:35.976 [2024-10-07 07:48:35.380779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.976 [2024-10-07 07:48:35.380806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:35.976 [2024-10-07 07:48:35.380824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.976 [2024-10-07 07:48:35.384289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.976 [2024-10-07 07:48:35.384563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:35.976 [2024-10-07 07:48:35.384800] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:35.976 [2024-10-07 07:48:35.384876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:35.976 [2024-10-07 07:48:35.385095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:35.976 [2024-10-07 07:48:35.385209] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:35.976 spare 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:35.976 [2024-10-07 07:48:35.485308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:35.976 [2024-10-07 07:48:35.485361] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:35.976 [2024-10-07 07:48:35.485934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:27:35.976 [2024-10-07 07:48:35.491945] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:35.976 [2024-10-07 07:48:35.492077] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:35.976 [2024-10-07 07:48:35.492438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:35.976 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.235 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:36.235 "name": "raid_bdev1", 00:27:36.235 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:36.235 "strip_size_kb": 64, 00:27:36.235 "state": "online", 00:27:36.235 "raid_level": "raid5f", 00:27:36.235 "superblock": true, 00:27:36.235 "num_base_bdevs": 3, 00:27:36.235 "num_base_bdevs_discovered": 3, 00:27:36.235 "num_base_bdevs_operational": 3, 00:27:36.235 "base_bdevs_list": [ 00:27:36.235 { 00:27:36.235 "name": "spare", 00:27:36.235 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:36.235 "is_configured": true, 00:27:36.235 "data_offset": 2048, 00:27:36.235 "data_size": 63488 00:27:36.235 }, 00:27:36.235 { 00:27:36.235 "name": "BaseBdev2", 00:27:36.235 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:36.235 "is_configured": true, 00:27:36.235 "data_offset": 
2048, 00:27:36.235 "data_size": 63488 00:27:36.235 }, 00:27:36.235 { 00:27:36.235 "name": "BaseBdev3", 00:27:36.235 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:36.235 "is_configured": true, 00:27:36.235 "data_offset": 2048, 00:27:36.235 "data_size": 63488 00:27:36.235 } 00:27:36.235 ] 00:27:36.235 }' 00:27:36.235 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:36.235 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.495 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:36.495 "name": "raid_bdev1", 00:27:36.495 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:36.495 "strip_size_kb": 64, 00:27:36.495 "state": "online", 00:27:36.495 "raid_level": "raid5f", 00:27:36.495 "superblock": true, 00:27:36.495 
"num_base_bdevs": 3, 00:27:36.495 "num_base_bdevs_discovered": 3, 00:27:36.495 "num_base_bdevs_operational": 3, 00:27:36.495 "base_bdevs_list": [ 00:27:36.495 { 00:27:36.495 "name": "spare", 00:27:36.495 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:36.495 "is_configured": true, 00:27:36.496 "data_offset": 2048, 00:27:36.496 "data_size": 63488 00:27:36.496 }, 00:27:36.496 { 00:27:36.496 "name": "BaseBdev2", 00:27:36.496 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:36.496 "is_configured": true, 00:27:36.496 "data_offset": 2048, 00:27:36.496 "data_size": 63488 00:27:36.496 }, 00:27:36.496 { 00:27:36.496 "name": "BaseBdev3", 00:27:36.496 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:36.496 "is_configured": true, 00:27:36.496 "data_offset": 2048, 00:27:36.496 "data_size": 63488 00:27:36.496 } 00:27:36.496 ] 00:27:36.496 }' 00:27:36.496 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:36.496 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:36.496 07:48:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:36.496 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:36.496 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.496 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.496 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.496 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:36.496 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:36.756 07:48:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.756 [2024-10-07 07:48:36.071262] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.756 07:48:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:36.756 "name": "raid_bdev1", 00:27:36.756 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:36.756 "strip_size_kb": 64, 00:27:36.756 "state": "online", 00:27:36.756 "raid_level": "raid5f", 00:27:36.756 "superblock": true, 00:27:36.756 "num_base_bdevs": 3, 00:27:36.756 "num_base_bdevs_discovered": 2, 00:27:36.756 "num_base_bdevs_operational": 2, 00:27:36.756 "base_bdevs_list": [ 00:27:36.756 { 00:27:36.756 "name": null, 00:27:36.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.756 "is_configured": false, 00:27:36.756 "data_offset": 0, 00:27:36.756 "data_size": 63488 00:27:36.756 }, 00:27:36.756 { 00:27:36.756 "name": "BaseBdev2", 00:27:36.756 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:36.756 "is_configured": true, 00:27:36.756 "data_offset": 2048, 00:27:36.756 "data_size": 63488 00:27:36.756 }, 00:27:36.756 { 00:27:36.756 "name": "BaseBdev3", 00:27:36.756 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:36.756 "is_configured": true, 00:27:36.756 "data_offset": 2048, 00:27:36.756 "data_size": 63488 00:27:36.756 } 00:27:36.756 ] 00:27:36.756 }' 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:36.756 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.015 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:37.015 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:37.015 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.015 [2024-10-07 07:48:36.535428] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:37.015 [2024-10-07 07:48:36.535641] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:37.015 [2024-10-07 07:48:36.535664] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:37.015 [2024-10-07 07:48:36.536152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:37.015 [2024-10-07 07:48:36.553481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:27:37.015 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:37.015 07:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:37.015 [2024-10-07 07:48:36.562283] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:38.396 "name": "raid_bdev1", 00:27:38.396 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:38.396 "strip_size_kb": 64, 00:27:38.396 "state": "online", 00:27:38.396 "raid_level": "raid5f", 00:27:38.396 "superblock": true, 00:27:38.396 "num_base_bdevs": 3, 00:27:38.396 "num_base_bdevs_discovered": 3, 00:27:38.396 "num_base_bdevs_operational": 3, 00:27:38.396 "process": { 00:27:38.396 "type": "rebuild", 00:27:38.396 "target": "spare", 00:27:38.396 "progress": { 00:27:38.396 "blocks": 18432, 00:27:38.396 "percent": 14 00:27:38.396 } 00:27:38.396 }, 00:27:38.396 "base_bdevs_list": [ 00:27:38.396 { 00:27:38.396 "name": "spare", 00:27:38.396 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:38.396 "is_configured": true, 00:27:38.396 "data_offset": 2048, 00:27:38.396 "data_size": 63488 00:27:38.396 }, 00:27:38.396 { 00:27:38.396 "name": "BaseBdev2", 00:27:38.396 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:38.396 "is_configured": true, 00:27:38.396 "data_offset": 2048, 00:27:38.396 "data_size": 63488 00:27:38.396 }, 00:27:38.396 { 00:27:38.396 "name": "BaseBdev3", 00:27:38.396 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:38.396 "is_configured": true, 00:27:38.396 "data_offset": 2048, 00:27:38.396 "data_size": 63488 00:27:38.396 } 00:27:38.396 ] 00:27:38.396 }' 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.396 [2024-10-07 07:48:37.712291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.396 [2024-10-07 07:48:37.774503] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:38.396 [2024-10-07 07:48:37.774587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.396 [2024-10-07 07:48:37.774605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.396 [2024-10-07 07:48:37.774617] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.396 07:48:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.396 "name": "raid_bdev1", 00:27:38.396 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:38.396 "strip_size_kb": 64, 00:27:38.396 "state": "online", 00:27:38.396 "raid_level": "raid5f", 00:27:38.396 "superblock": true, 00:27:38.396 "num_base_bdevs": 3, 00:27:38.396 "num_base_bdevs_discovered": 2, 00:27:38.396 "num_base_bdevs_operational": 2, 00:27:38.396 "base_bdevs_list": [ 00:27:38.396 { 00:27:38.396 "name": null, 00:27:38.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.396 "is_configured": false, 00:27:38.396 "data_offset": 0, 00:27:38.396 "data_size": 63488 00:27:38.396 }, 00:27:38.396 { 00:27:38.396 "name": "BaseBdev2", 00:27:38.396 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:38.396 "is_configured": true, 00:27:38.396 "data_offset": 2048, 00:27:38.396 "data_size": 63488 00:27:38.396 }, 00:27:38.396 { 00:27:38.396 "name": "BaseBdev3", 00:27:38.396 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:38.396 "is_configured": true, 00:27:38.396 "data_offset": 2048, 00:27:38.396 "data_size": 63488 00:27:38.396 } 00:27:38.396 ] 00:27:38.396 }' 00:27:38.396 07:48:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.396 07:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.964 07:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:38.964 07:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:38.964 07:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.964 [2024-10-07 07:48:38.253062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:38.964 [2024-10-07 07:48:38.253134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.964 [2024-10-07 07:48:38.253161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:27:38.964 [2024-10-07 07:48:38.253181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.964 [2024-10-07 07:48:38.253688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.964 [2024-10-07 07:48:38.253792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:38.964 [2024-10-07 07:48:38.253909] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:38.964 [2024-10-07 07:48:38.253929] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:38.964 [2024-10-07 07:48:38.253942] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:38.964 [2024-10-07 07:48:38.253970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:38.964 [2024-10-07 07:48:38.270509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:27:38.964 spare 00:27:38.964 07:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:38.964 07:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:38.964 [2024-10-07 07:48:38.279820] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:39.900 "name": "raid_bdev1", 00:27:39.900 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:39.900 "strip_size_kb": 64, 00:27:39.900 "state": 
"online", 00:27:39.900 "raid_level": "raid5f", 00:27:39.900 "superblock": true, 00:27:39.900 "num_base_bdevs": 3, 00:27:39.900 "num_base_bdevs_discovered": 3, 00:27:39.900 "num_base_bdevs_operational": 3, 00:27:39.900 "process": { 00:27:39.900 "type": "rebuild", 00:27:39.900 "target": "spare", 00:27:39.900 "progress": { 00:27:39.900 "blocks": 18432, 00:27:39.900 "percent": 14 00:27:39.900 } 00:27:39.900 }, 00:27:39.900 "base_bdevs_list": [ 00:27:39.900 { 00:27:39.900 "name": "spare", 00:27:39.900 "uuid": "d8456986-a684-5006-85a1-9541989f7870", 00:27:39.900 "is_configured": true, 00:27:39.900 "data_offset": 2048, 00:27:39.900 "data_size": 63488 00:27:39.900 }, 00:27:39.900 { 00:27:39.900 "name": "BaseBdev2", 00:27:39.900 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:39.900 "is_configured": true, 00:27:39.900 "data_offset": 2048, 00:27:39.900 "data_size": 63488 00:27:39.900 }, 00:27:39.900 { 00:27:39.900 "name": "BaseBdev3", 00:27:39.900 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:39.900 "is_configured": true, 00:27:39.900 "data_offset": 2048, 00:27:39.900 "data_size": 63488 00:27:39.900 } 00:27:39.900 ] 00:27:39.900 }' 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:39.900 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.900 [2024-10-07 07:48:39.409454] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.159 [2024-10-07 07:48:39.491836] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:40.159 [2024-10-07 07:48:39.491908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.159 [2024-10-07 07:48:39.491929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.159 [2024-10-07 07:48:39.491938] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:40.159 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.159 "name": "raid_bdev1", 00:27:40.159 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:40.159 "strip_size_kb": 64, 00:27:40.159 "state": "online", 00:27:40.159 "raid_level": "raid5f", 00:27:40.159 "superblock": true, 00:27:40.159 "num_base_bdevs": 3, 00:27:40.159 "num_base_bdevs_discovered": 2, 00:27:40.159 "num_base_bdevs_operational": 2, 00:27:40.159 "base_bdevs_list": [ 00:27:40.160 { 00:27:40.160 "name": null, 00:27:40.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.160 "is_configured": false, 00:27:40.160 "data_offset": 0, 00:27:40.160 "data_size": 63488 00:27:40.160 }, 00:27:40.160 { 00:27:40.160 "name": "BaseBdev2", 00:27:40.160 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:40.160 "is_configured": true, 00:27:40.160 "data_offset": 2048, 00:27:40.160 "data_size": 63488 00:27:40.160 }, 00:27:40.160 { 00:27:40.160 "name": "BaseBdev3", 00:27:40.160 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:40.160 "is_configured": true, 00:27:40.160 "data_offset": 2048, 00:27:40.160 "data_size": 63488 00:27:40.160 } 00:27:40.160 ] 00:27:40.160 }' 00:27:40.160 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.160 07:48:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:40.728 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:40.728 "name": "raid_bdev1", 00:27:40.728 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:40.728 "strip_size_kb": 64, 00:27:40.728 "state": "online", 00:27:40.728 "raid_level": "raid5f", 00:27:40.728 "superblock": true, 00:27:40.728 "num_base_bdevs": 3, 00:27:40.728 "num_base_bdevs_discovered": 2, 00:27:40.728 "num_base_bdevs_operational": 2, 00:27:40.728 "base_bdevs_list": [ 00:27:40.728 { 00:27:40.728 "name": null, 00:27:40.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.728 "is_configured": false, 00:27:40.728 "data_offset": 0, 00:27:40.728 "data_size": 63488 00:27:40.728 }, 00:27:40.728 { 00:27:40.728 "name": "BaseBdev2", 00:27:40.728 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:40.728 "is_configured": true, 00:27:40.729 "data_offset": 2048, 00:27:40.729 "data_size": 63488 00:27:40.729 }, 00:27:40.729 { 00:27:40.729 "name": "BaseBdev3", 00:27:40.729 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:40.729 "is_configured": true, 
00:27:40.729 "data_offset": 2048, 00:27:40.729 "data_size": 63488 00:27:40.729 } 00:27:40.729 ] 00:27:40.729 }' 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.729 [2024-10-07 07:48:40.160829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:40.729 [2024-10-07 07:48:40.160888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:40.729 [2024-10-07 07:48:40.160920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:27:40.729 [2024-10-07 07:48:40.160933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:40.729 [2024-10-07 07:48:40.161434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:40.729 [2024-10-07 
07:48:40.161464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:40.729 [2024-10-07 07:48:40.161559] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:40.729 [2024-10-07 07:48:40.161579] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:40.729 [2024-10-07 07:48:40.161600] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:40.729 [2024-10-07 07:48:40.161613] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:40.729 BaseBdev1 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:40.729 07:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.669 07:48:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.669 "name": "raid_bdev1", 00:27:41.669 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:41.669 "strip_size_kb": 64, 00:27:41.669 "state": "online", 00:27:41.669 "raid_level": "raid5f", 00:27:41.669 "superblock": true, 00:27:41.669 "num_base_bdevs": 3, 00:27:41.669 "num_base_bdevs_discovered": 2, 00:27:41.669 "num_base_bdevs_operational": 2, 00:27:41.669 "base_bdevs_list": [ 00:27:41.669 { 00:27:41.669 "name": null, 00:27:41.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.669 "is_configured": false, 00:27:41.669 "data_offset": 0, 00:27:41.669 "data_size": 63488 00:27:41.669 }, 00:27:41.669 { 00:27:41.669 "name": "BaseBdev2", 00:27:41.669 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:41.669 "is_configured": true, 00:27:41.669 "data_offset": 2048, 00:27:41.669 "data_size": 63488 00:27:41.669 }, 00:27:41.669 { 00:27:41.669 "name": "BaseBdev3", 00:27:41.669 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:41.669 "is_configured": true, 00:27:41.669 "data_offset": 2048, 00:27:41.669 "data_size": 63488 00:27:41.669 } 00:27:41.669 ] 00:27:41.669 }' 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.669 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:42.238 "name": "raid_bdev1", 00:27:42.238 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:42.238 "strip_size_kb": 64, 00:27:42.238 "state": "online", 00:27:42.238 "raid_level": "raid5f", 00:27:42.238 "superblock": true, 00:27:42.238 "num_base_bdevs": 3, 00:27:42.238 "num_base_bdevs_discovered": 2, 00:27:42.238 "num_base_bdevs_operational": 2, 00:27:42.238 "base_bdevs_list": [ 00:27:42.238 { 00:27:42.238 "name": null, 00:27:42.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.238 "is_configured": false, 00:27:42.238 "data_offset": 0, 00:27:42.238 "data_size": 63488 00:27:42.238 }, 00:27:42.238 { 00:27:42.238 "name": "BaseBdev2", 00:27:42.238 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 
00:27:42.238 "is_configured": true, 00:27:42.238 "data_offset": 2048, 00:27:42.238 "data_size": 63488 00:27:42.238 }, 00:27:42.238 { 00:27:42.238 "name": "BaseBdev3", 00:27:42.238 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:42.238 "is_configured": true, 00:27:42.238 "data_offset": 2048, 00:27:42.238 "data_size": 63488 00:27:42.238 } 00:27:42.238 ] 00:27:42.238 }' 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # local es=0 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:42.238 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:42.238 07:48:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:42.238 [2024-10-07 07:48:41.773408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:42.238 [2024-10-07 07:48:41.773596] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:42.239 [2024-10-07 07:48:41.773615] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:42.239 request: 00:27:42.239 { 00:27:42.239 "base_bdev": "BaseBdev1", 00:27:42.239 "raid_bdev": "raid_bdev1", 00:27:42.239 "method": "bdev_raid_add_base_bdev", 00:27:42.239 "req_id": 1 00:27:42.239 } 00:27:42.239 Got JSON-RPC error response 00:27:42.239 response: 00:27:42.239 { 00:27:42.239 "code": -22, 00:27:42.239 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:42.239 } 00:27:42.239 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:27:42.239 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@656 -- # es=1 00:27:42.239 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:27:42.239 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:27:42.239 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:27:42.239 07:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:43.618 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.619 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:43.619 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.619 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.619 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:43.619 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:43.619 "name": "raid_bdev1", 00:27:43.619 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:43.619 "strip_size_kb": 64, 00:27:43.619 "state": "online", 00:27:43.619 "raid_level": "raid5f", 00:27:43.619 "superblock": true, 00:27:43.619 "num_base_bdevs": 3, 00:27:43.619 "num_base_bdevs_discovered": 2, 00:27:43.619 "num_base_bdevs_operational": 2, 00:27:43.619 "base_bdevs_list": [ 00:27:43.619 { 00:27:43.619 "name": null, 00:27:43.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.619 "is_configured": false, 00:27:43.619 "data_offset": 0, 00:27:43.619 "data_size": 63488 00:27:43.619 }, 00:27:43.619 { 00:27:43.619 
"name": "BaseBdev2", 00:27:43.619 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:43.619 "is_configured": true, 00:27:43.619 "data_offset": 2048, 00:27:43.619 "data_size": 63488 00:27:43.619 }, 00:27:43.619 { 00:27:43.619 "name": "BaseBdev3", 00:27:43.619 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:43.619 "is_configured": true, 00:27:43.619 "data_offset": 2048, 00:27:43.619 "data_size": 63488 00:27:43.619 } 00:27:43.619 ] 00:27:43.619 }' 00:27:43.619 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:43.619 07:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:43.878 "name": "raid_bdev1", 00:27:43.878 "uuid": "68bd14c4-e056-430c-9ace-9535129f4d8f", 00:27:43.878 
"strip_size_kb": 64, 00:27:43.878 "state": "online", 00:27:43.878 "raid_level": "raid5f", 00:27:43.878 "superblock": true, 00:27:43.878 "num_base_bdevs": 3, 00:27:43.878 "num_base_bdevs_discovered": 2, 00:27:43.878 "num_base_bdevs_operational": 2, 00:27:43.878 "base_bdevs_list": [ 00:27:43.878 { 00:27:43.878 "name": null, 00:27:43.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.878 "is_configured": false, 00:27:43.878 "data_offset": 0, 00:27:43.878 "data_size": 63488 00:27:43.878 }, 00:27:43.878 { 00:27:43.878 "name": "BaseBdev2", 00:27:43.878 "uuid": "02a258cd-c1b0-568e-bb2b-4bf704a4f21d", 00:27:43.878 "is_configured": true, 00:27:43.878 "data_offset": 2048, 00:27:43.878 "data_size": 63488 00:27:43.878 }, 00:27:43.878 { 00:27:43.878 "name": "BaseBdev3", 00:27:43.878 "uuid": "ed0a00dc-51f0-5ded-9d5d-26ee54cbd2bd", 00:27:43.878 "is_configured": true, 00:27:43.878 "data_offset": 2048, 00:27:43.878 "data_size": 63488 00:27:43.878 } 00:27:43.878 ] 00:27:43.878 }' 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82269 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' -z 82269 ']' 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # kill -0 82269 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # uname 00:27:43.878 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:27:43.878 07:48:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 82269 00:27:43.879 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:27:43.879 killing process with pid 82269 00:27:43.879 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:27:43.879 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 82269' 00:27:43.879 Received shutdown signal, test time was about 60.000000 seconds 00:27:43.879 00:27:43.879 Latency(us) 00:27:43.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.879 =================================================================================================================== 00:27:43.879 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:43.879 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # kill 82269 00:27:43.879 [2024-10-07 07:48:43.375523] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:43.879 07:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@977 -- # wait 82269 00:27:43.879 [2024-10-07 07:48:43.375657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.879 [2024-10-07 07:48:43.375755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.879 [2024-10-07 07:48:43.375779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:44.448 [2024-10-07 07:48:43.792164] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:45.829 07:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:27:45.829 00:27:45.829 real 0m24.143s 00:27:45.829 user 0m31.074s 00:27:45.829 sys 0m3.015s 00:27:45.829 07:48:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # xtrace_disable 00:27:45.829 07:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.829 ************************************ 00:27:45.829 END TEST raid5f_rebuild_test_sb 00:27:45.829 ************************************ 00:27:45.829 07:48:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:27:45.829 07:48:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:27:45.829 07:48:45 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:27:45.829 07:48:45 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:27:45.829 07:48:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:45.829 ************************************ 00:27:45.829 START TEST raid5f_state_function_test 00:27:45.829 ************************************ 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # raid_state_function_test raid5f 4 false 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83022 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83022' 00:27:45.829 Process raid pid: 83022 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83022 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # '[' -z 83022 ']' 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:45.829 07:48:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.829 [2024-10-07 07:48:45.308146] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:27:45.829 [2024-10-07 07:48:45.308545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.089 [2024-10-07 07:48:45.489282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.348 [2024-10-07 07:48:45.708768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.608 [2024-10-07 07:48:45.954087] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:46.608 [2024-10-07 07:48:45.954176] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:46.608 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:46.608 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@867 -- # return 0 00:27:46.608 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:46.608 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:46.608 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.867 [2024-10-07 07:48:46.172410] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:46.867 [2024-10-07 07:48:46.172478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:46.867 [2024-10-07 07:48:46.172512] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:46.867 [2024-10-07 07:48:46.172530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:46.867 [2024-10-07 07:48:46.172539] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:27:46.867 [2024-10-07 07:48:46.172552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:46.867 [2024-10-07 07:48:46.172561] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:46.867 [2024-10-07 07:48:46.172574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.867 07:48:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:46.867 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.867 "name": "Existed_Raid", 00:27:46.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.867 "strip_size_kb": 64, 00:27:46.867 "state": "configuring", 00:27:46.867 "raid_level": "raid5f", 00:27:46.867 "superblock": false, 00:27:46.867 "num_base_bdevs": 4, 00:27:46.867 "num_base_bdevs_discovered": 0, 00:27:46.867 "num_base_bdevs_operational": 4, 00:27:46.868 "base_bdevs_list": [ 00:27:46.868 { 00:27:46.868 "name": "BaseBdev1", 00:27:46.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.868 "is_configured": false, 00:27:46.868 "data_offset": 0, 00:27:46.868 "data_size": 0 00:27:46.868 }, 00:27:46.868 { 00:27:46.868 "name": "BaseBdev2", 00:27:46.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.868 "is_configured": false, 00:27:46.868 "data_offset": 0, 00:27:46.868 "data_size": 0 00:27:46.868 }, 00:27:46.868 { 00:27:46.868 "name": "BaseBdev3", 00:27:46.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.868 "is_configured": false, 00:27:46.868 "data_offset": 0, 00:27:46.868 "data_size": 0 00:27:46.868 }, 00:27:46.868 { 00:27:46.868 "name": "BaseBdev4", 00:27:46.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.868 "is_configured": false, 00:27:46.868 "data_offset": 0, 00:27:46.868 "data_size": 0 00:27:46.868 } 00:27:46.868 ] 00:27:46.868 }' 00:27:46.868 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.868 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 [2024-10-07 07:48:46.592412] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:47.127 [2024-10-07 07:48:46.592598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 [2024-10-07 07:48:46.604516] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:47.127 [2024-10-07 07:48:46.604741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:47.127 [2024-10-07 07:48:46.604906] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:47.127 [2024-10-07 07:48:46.605092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:47.127 [2024-10-07 07:48:46.605230] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:47.127 [2024-10-07 07:48:46.605379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:47.127 [2024-10-07 07:48:46.605513] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:27:47.127 [2024-10-07 07:48:46.605686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 [2024-10-07 07:48:46.662138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:47.127 BaseBdev1 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.127 
07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.127 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.386 [ 00:27:47.386 { 00:27:47.387 "name": "BaseBdev1", 00:27:47.387 "aliases": [ 00:27:47.387 "63a3aa43-10a6-4187-bf64-4df9d07f99cb" 00:27:47.387 ], 00:27:47.387 "product_name": "Malloc disk", 00:27:47.387 "block_size": 512, 00:27:47.387 "num_blocks": 65536, 00:27:47.387 "uuid": "63a3aa43-10a6-4187-bf64-4df9d07f99cb", 00:27:47.387 "assigned_rate_limits": { 00:27:47.387 "rw_ios_per_sec": 0, 00:27:47.387 "rw_mbytes_per_sec": 0, 00:27:47.387 "r_mbytes_per_sec": 0, 00:27:47.387 "w_mbytes_per_sec": 0 00:27:47.387 }, 00:27:47.387 "claimed": true, 00:27:47.387 "claim_type": "exclusive_write", 00:27:47.387 "zoned": false, 00:27:47.387 "supported_io_types": { 00:27:47.387 "read": true, 00:27:47.387 "write": true, 00:27:47.387 "unmap": true, 00:27:47.387 "flush": true, 00:27:47.387 "reset": true, 00:27:47.387 "nvme_admin": false, 00:27:47.387 "nvme_io": false, 00:27:47.387 "nvme_io_md": false, 00:27:47.387 "write_zeroes": true, 00:27:47.387 "zcopy": true, 00:27:47.387 "get_zone_info": false, 00:27:47.387 "zone_management": false, 00:27:47.387 "zone_append": false, 00:27:47.387 "compare": false, 00:27:47.387 "compare_and_write": false, 00:27:47.387 "abort": true, 00:27:47.387 "seek_hole": false, 00:27:47.387 "seek_data": false, 00:27:47.387 "copy": true, 00:27:47.387 "nvme_iov_md": false 00:27:47.387 }, 00:27:47.387 "memory_domains": [ 00:27:47.387 { 00:27:47.387 "dma_device_id": "system", 00:27:47.387 "dma_device_type": 1 00:27:47.387 }, 00:27:47.387 { 00:27:47.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.387 "dma_device_type": 2 00:27:47.387 } 00:27:47.387 ], 00:27:47.387 "driver_specific": {} 00:27:47.387 } 
00:27:47.387 ] 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:47.387 "name": "Existed_Raid", 00:27:47.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.387 "strip_size_kb": 64, 00:27:47.387 "state": "configuring", 00:27:47.387 "raid_level": "raid5f", 00:27:47.387 "superblock": false, 00:27:47.387 "num_base_bdevs": 4, 00:27:47.387 "num_base_bdevs_discovered": 1, 00:27:47.387 "num_base_bdevs_operational": 4, 00:27:47.387 "base_bdevs_list": [ 00:27:47.387 { 00:27:47.387 "name": "BaseBdev1", 00:27:47.387 "uuid": "63a3aa43-10a6-4187-bf64-4df9d07f99cb", 00:27:47.387 "is_configured": true, 00:27:47.387 "data_offset": 0, 00:27:47.387 "data_size": 65536 00:27:47.387 }, 00:27:47.387 { 00:27:47.387 "name": "BaseBdev2", 00:27:47.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.387 "is_configured": false, 00:27:47.387 "data_offset": 0, 00:27:47.387 "data_size": 0 00:27:47.387 }, 00:27:47.387 { 00:27:47.387 "name": "BaseBdev3", 00:27:47.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.387 "is_configured": false, 00:27:47.387 "data_offset": 0, 00:27:47.387 "data_size": 0 00:27:47.387 }, 00:27:47.387 { 00:27:47.387 "name": "BaseBdev4", 00:27:47.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.387 "is_configured": false, 00:27:47.387 "data_offset": 0, 00:27:47.387 "data_size": 0 00:27:47.387 } 00:27:47.387 ] 00:27:47.387 }' 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:47.387 07:48:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.646 
[2024-10-07 07:48:47.086285] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:47.646 [2024-10-07 07:48:47.086492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.646 [2024-10-07 07:48:47.094329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:47.646 [2024-10-07 07:48:47.096472] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:47.646 [2024-10-07 07:48:47.096524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:47.646 [2024-10-07 07:48:47.096537] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:47.646 [2024-10-07 07:48:47.096554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:47.646 [2024-10-07 07:48:47.096563] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:47.646 [2024-10-07 07:48:47.096576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:47.646 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:47.646 "name": "Existed_Raid", 00:27:47.646 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:47.646 "strip_size_kb": 64, 00:27:47.646 "state": "configuring", 00:27:47.646 "raid_level": "raid5f", 00:27:47.646 "superblock": false, 00:27:47.646 "num_base_bdevs": 4, 00:27:47.646 "num_base_bdevs_discovered": 1, 00:27:47.646 "num_base_bdevs_operational": 4, 00:27:47.646 "base_bdevs_list": [ 00:27:47.646 { 00:27:47.646 "name": "BaseBdev1", 00:27:47.646 "uuid": "63a3aa43-10a6-4187-bf64-4df9d07f99cb", 00:27:47.646 "is_configured": true, 00:27:47.646 "data_offset": 0, 00:27:47.646 "data_size": 65536 00:27:47.646 }, 00:27:47.646 { 00:27:47.646 "name": "BaseBdev2", 00:27:47.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.646 "is_configured": false, 00:27:47.646 "data_offset": 0, 00:27:47.647 "data_size": 0 00:27:47.647 }, 00:27:47.647 { 00:27:47.647 "name": "BaseBdev3", 00:27:47.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.647 "is_configured": false, 00:27:47.647 "data_offset": 0, 00:27:47.647 "data_size": 0 00:27:47.647 }, 00:27:47.647 { 00:27:47.647 "name": "BaseBdev4", 00:27:47.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.647 "is_configured": false, 00:27:47.647 "data_offset": 0, 00:27:47.647 "data_size": 0 00:27:47.647 } 00:27:47.647 ] 00:27:47.647 }' 00:27:47.647 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:47.647 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.216 [2024-10-07 07:48:47.554998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:48.216 BaseBdev2 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.216 [ 00:27:48.216 { 00:27:48.216 "name": "BaseBdev2", 00:27:48.216 "aliases": [ 00:27:48.216 "3ba8de8b-c807-4524-864f-8f7da81fa673" 00:27:48.216 ], 00:27:48.216 "product_name": "Malloc disk", 00:27:48.216 "block_size": 512, 00:27:48.216 "num_blocks": 65536, 00:27:48.216 "uuid": "3ba8de8b-c807-4524-864f-8f7da81fa673", 00:27:48.216 "assigned_rate_limits": { 00:27:48.216 "rw_ios_per_sec": 0, 00:27:48.216 "rw_mbytes_per_sec": 0, 00:27:48.216 
"r_mbytes_per_sec": 0, 00:27:48.216 "w_mbytes_per_sec": 0 00:27:48.216 }, 00:27:48.216 "claimed": true, 00:27:48.216 "claim_type": "exclusive_write", 00:27:48.216 "zoned": false, 00:27:48.216 "supported_io_types": { 00:27:48.216 "read": true, 00:27:48.216 "write": true, 00:27:48.216 "unmap": true, 00:27:48.216 "flush": true, 00:27:48.216 "reset": true, 00:27:48.216 "nvme_admin": false, 00:27:48.216 "nvme_io": false, 00:27:48.216 "nvme_io_md": false, 00:27:48.216 "write_zeroes": true, 00:27:48.216 "zcopy": true, 00:27:48.216 "get_zone_info": false, 00:27:48.216 "zone_management": false, 00:27:48.216 "zone_append": false, 00:27:48.216 "compare": false, 00:27:48.216 "compare_and_write": false, 00:27:48.216 "abort": true, 00:27:48.216 "seek_hole": false, 00:27:48.216 "seek_data": false, 00:27:48.216 "copy": true, 00:27:48.216 "nvme_iov_md": false 00:27:48.216 }, 00:27:48.216 "memory_domains": [ 00:27:48.216 { 00:27:48.216 "dma_device_id": "system", 00:27:48.216 "dma_device_type": 1 00:27:48.216 }, 00:27:48.216 { 00:27:48.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.216 "dma_device_type": 2 00:27:48.216 } 00:27:48.216 ], 00:27:48.216 "driver_specific": {} 00:27:48.216 } 00:27:48.216 ] 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.216 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:48.216 "name": "Existed_Raid", 00:27:48.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.216 "strip_size_kb": 64, 00:27:48.216 "state": "configuring", 00:27:48.216 "raid_level": "raid5f", 00:27:48.216 "superblock": false, 00:27:48.216 "num_base_bdevs": 4, 00:27:48.216 "num_base_bdevs_discovered": 2, 00:27:48.216 "num_base_bdevs_operational": 4, 00:27:48.216 "base_bdevs_list": [ 00:27:48.217 { 00:27:48.217 "name": "BaseBdev1", 00:27:48.217 "uuid": 
"63a3aa43-10a6-4187-bf64-4df9d07f99cb", 00:27:48.217 "is_configured": true, 00:27:48.217 "data_offset": 0, 00:27:48.217 "data_size": 65536 00:27:48.217 }, 00:27:48.217 { 00:27:48.217 "name": "BaseBdev2", 00:27:48.217 "uuid": "3ba8de8b-c807-4524-864f-8f7da81fa673", 00:27:48.217 "is_configured": true, 00:27:48.217 "data_offset": 0, 00:27:48.217 "data_size": 65536 00:27:48.217 }, 00:27:48.217 { 00:27:48.217 "name": "BaseBdev3", 00:27:48.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.217 "is_configured": false, 00:27:48.217 "data_offset": 0, 00:27:48.217 "data_size": 0 00:27:48.217 }, 00:27:48.217 { 00:27:48.217 "name": "BaseBdev4", 00:27:48.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.217 "is_configured": false, 00:27:48.217 "data_offset": 0, 00:27:48.217 "data_size": 0 00:27:48.217 } 00:27:48.217 ] 00:27:48.217 }' 00:27:48.217 07:48:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:48.217 07:48:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.784 [2024-10-07 07:48:48.076892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:48.784 BaseBdev3 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- 
# local bdev_timeout= 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.784 [ 00:27:48.784 { 00:27:48.784 "name": "BaseBdev3", 00:27:48.784 "aliases": [ 00:27:48.784 "1177ffa0-0165-460f-80ea-288852e95338" 00:27:48.784 ], 00:27:48.784 "product_name": "Malloc disk", 00:27:48.784 "block_size": 512, 00:27:48.784 "num_blocks": 65536, 00:27:48.784 "uuid": "1177ffa0-0165-460f-80ea-288852e95338", 00:27:48.784 "assigned_rate_limits": { 00:27:48.784 "rw_ios_per_sec": 0, 00:27:48.784 "rw_mbytes_per_sec": 0, 00:27:48.784 "r_mbytes_per_sec": 0, 00:27:48.784 "w_mbytes_per_sec": 0 00:27:48.784 }, 00:27:48.784 "claimed": true, 00:27:48.784 "claim_type": "exclusive_write", 00:27:48.784 "zoned": false, 00:27:48.784 "supported_io_types": { 00:27:48.784 "read": true, 00:27:48.784 "write": true, 00:27:48.784 "unmap": true, 00:27:48.784 "flush": true, 00:27:48.784 "reset": true, 00:27:48.784 "nvme_admin": false, 
00:27:48.784 "nvme_io": false, 00:27:48.784 "nvme_io_md": false, 00:27:48.784 "write_zeroes": true, 00:27:48.784 "zcopy": true, 00:27:48.784 "get_zone_info": false, 00:27:48.784 "zone_management": false, 00:27:48.784 "zone_append": false, 00:27:48.784 "compare": false, 00:27:48.784 "compare_and_write": false, 00:27:48.784 "abort": true, 00:27:48.784 "seek_hole": false, 00:27:48.784 "seek_data": false, 00:27:48.784 "copy": true, 00:27:48.784 "nvme_iov_md": false 00:27:48.784 }, 00:27:48.784 "memory_domains": [ 00:27:48.784 { 00:27:48.784 "dma_device_id": "system", 00:27:48.784 "dma_device_type": 1 00:27:48.784 }, 00:27:48.784 { 00:27:48.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.784 "dma_device_type": 2 00:27:48.784 } 00:27:48.784 ], 00:27:48.784 "driver_specific": {} 00:27:48.784 } 00:27:48.784 ] 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:48.784 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:48.784 "name": "Existed_Raid", 00:27:48.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.784 "strip_size_kb": 64, 00:27:48.784 "state": "configuring", 00:27:48.784 "raid_level": "raid5f", 00:27:48.784 "superblock": false, 00:27:48.784 "num_base_bdevs": 4, 00:27:48.784 "num_base_bdevs_discovered": 3, 00:27:48.784 "num_base_bdevs_operational": 4, 00:27:48.784 "base_bdevs_list": [ 00:27:48.785 { 00:27:48.785 "name": "BaseBdev1", 00:27:48.785 "uuid": "63a3aa43-10a6-4187-bf64-4df9d07f99cb", 00:27:48.785 "is_configured": true, 00:27:48.785 "data_offset": 0, 00:27:48.785 "data_size": 65536 00:27:48.785 }, 00:27:48.785 { 00:27:48.785 "name": "BaseBdev2", 00:27:48.785 "uuid": "3ba8de8b-c807-4524-864f-8f7da81fa673", 00:27:48.785 "is_configured": true, 00:27:48.785 "data_offset": 0, 00:27:48.785 "data_size": 65536 00:27:48.785 }, 00:27:48.785 { 
00:27:48.785 "name": "BaseBdev3", 00:27:48.785 "uuid": "1177ffa0-0165-460f-80ea-288852e95338", 00:27:48.785 "is_configured": true, 00:27:48.785 "data_offset": 0, 00:27:48.785 "data_size": 65536 00:27:48.785 }, 00:27:48.785 { 00:27:48.785 "name": "BaseBdev4", 00:27:48.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.785 "is_configured": false, 00:27:48.785 "data_offset": 0, 00:27:48.785 "data_size": 0 00:27:48.785 } 00:27:48.785 ] 00:27:48.785 }' 00:27:48.785 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:48.785 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.046 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:49.046 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.046 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.305 [2024-10-07 07:48:48.615903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:49.305 [2024-10-07 07:48:48.615968] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:49.305 [2024-10-07 07:48:48.615987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:49.305 [2024-10-07 07:48:48.616281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:49.305 [2024-10-07 07:48:48.627344] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:49.305 [2024-10-07 07:48:48.627523] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:49.305 [2024-10-07 07:48:48.627918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.305 BaseBdev4 00:27:49.305 07:48:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.305 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.305 [ 00:27:49.305 { 00:27:49.305 "name": "BaseBdev4", 00:27:49.305 "aliases": [ 00:27:49.305 "d04c5b6f-76a1-421f-895e-a33413ba4af2" 00:27:49.305 ], 00:27:49.305 "product_name": "Malloc disk", 00:27:49.305 "block_size": 512, 00:27:49.306 "num_blocks": 65536, 00:27:49.306 "uuid": "d04c5b6f-76a1-421f-895e-a33413ba4af2", 00:27:49.306 "assigned_rate_limits": { 00:27:49.306 "rw_ios_per_sec": 0, 00:27:49.306 
"rw_mbytes_per_sec": 0, 00:27:49.306 "r_mbytes_per_sec": 0, 00:27:49.306 "w_mbytes_per_sec": 0 00:27:49.306 }, 00:27:49.306 "claimed": true, 00:27:49.306 "claim_type": "exclusive_write", 00:27:49.306 "zoned": false, 00:27:49.306 "supported_io_types": { 00:27:49.306 "read": true, 00:27:49.306 "write": true, 00:27:49.306 "unmap": true, 00:27:49.306 "flush": true, 00:27:49.306 "reset": true, 00:27:49.306 "nvme_admin": false, 00:27:49.306 "nvme_io": false, 00:27:49.306 "nvme_io_md": false, 00:27:49.306 "write_zeroes": true, 00:27:49.306 "zcopy": true, 00:27:49.306 "get_zone_info": false, 00:27:49.306 "zone_management": false, 00:27:49.306 "zone_append": false, 00:27:49.306 "compare": false, 00:27:49.306 "compare_and_write": false, 00:27:49.306 "abort": true, 00:27:49.306 "seek_hole": false, 00:27:49.306 "seek_data": false, 00:27:49.306 "copy": true, 00:27:49.306 "nvme_iov_md": false 00:27:49.306 }, 00:27:49.306 "memory_domains": [ 00:27:49.306 { 00:27:49.306 "dma_device_id": "system", 00:27:49.306 "dma_device_type": 1 00:27:49.306 }, 00:27:49.306 { 00:27:49.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.306 "dma_device_type": 2 00:27:49.306 } 00:27:49.306 ], 00:27:49.306 "driver_specific": {} 00:27:49.306 } 00:27:49.306 ] 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:49.306 07:48:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.306 "name": "Existed_Raid", 00:27:49.306 "uuid": "5bac3ba1-b88e-4644-acbe-5a975df44bf2", 00:27:49.306 "strip_size_kb": 64, 00:27:49.306 "state": "online", 00:27:49.306 "raid_level": "raid5f", 00:27:49.306 "superblock": false, 00:27:49.306 "num_base_bdevs": 4, 00:27:49.306 "num_base_bdevs_discovered": 4, 00:27:49.306 "num_base_bdevs_operational": 4, 00:27:49.306 "base_bdevs_list": [ 00:27:49.306 { 00:27:49.306 "name": 
"BaseBdev1", 00:27:49.306 "uuid": "63a3aa43-10a6-4187-bf64-4df9d07f99cb", 00:27:49.306 "is_configured": true, 00:27:49.306 "data_offset": 0, 00:27:49.306 "data_size": 65536 00:27:49.306 }, 00:27:49.306 { 00:27:49.306 "name": "BaseBdev2", 00:27:49.306 "uuid": "3ba8de8b-c807-4524-864f-8f7da81fa673", 00:27:49.306 "is_configured": true, 00:27:49.306 "data_offset": 0, 00:27:49.306 "data_size": 65536 00:27:49.306 }, 00:27:49.306 { 00:27:49.306 "name": "BaseBdev3", 00:27:49.306 "uuid": "1177ffa0-0165-460f-80ea-288852e95338", 00:27:49.306 "is_configured": true, 00:27:49.306 "data_offset": 0, 00:27:49.306 "data_size": 65536 00:27:49.306 }, 00:27:49.306 { 00:27:49.306 "name": "BaseBdev4", 00:27:49.306 "uuid": "d04c5b6f-76a1-421f-895e-a33413ba4af2", 00:27:49.306 "is_configured": true, 00:27:49.306 "data_offset": 0, 00:27:49.306 "data_size": 65536 00:27:49.306 } 00:27:49.306 ] 00:27:49.306 }' 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.306 07:48:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.565 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.825 [2024-10-07 07:48:49.125651] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.825 "name": "Existed_Raid", 00:27:49.825 "aliases": [ 00:27:49.825 "5bac3ba1-b88e-4644-acbe-5a975df44bf2" 00:27:49.825 ], 00:27:49.825 "product_name": "Raid Volume", 00:27:49.825 "block_size": 512, 00:27:49.825 "num_blocks": 196608, 00:27:49.825 "uuid": "5bac3ba1-b88e-4644-acbe-5a975df44bf2", 00:27:49.825 "assigned_rate_limits": { 00:27:49.825 "rw_ios_per_sec": 0, 00:27:49.825 "rw_mbytes_per_sec": 0, 00:27:49.825 "r_mbytes_per_sec": 0, 00:27:49.825 "w_mbytes_per_sec": 0 00:27:49.825 }, 00:27:49.825 "claimed": false, 00:27:49.825 "zoned": false, 00:27:49.825 "supported_io_types": { 00:27:49.825 "read": true, 00:27:49.825 "write": true, 00:27:49.825 "unmap": false, 00:27:49.825 "flush": false, 00:27:49.825 "reset": true, 00:27:49.825 "nvme_admin": false, 00:27:49.825 "nvme_io": false, 00:27:49.825 "nvme_io_md": false, 00:27:49.825 "write_zeroes": true, 00:27:49.825 "zcopy": false, 00:27:49.825 "get_zone_info": false, 00:27:49.825 "zone_management": false, 00:27:49.825 "zone_append": false, 00:27:49.825 "compare": false, 00:27:49.825 "compare_and_write": false, 00:27:49.825 "abort": false, 00:27:49.825 "seek_hole": false, 00:27:49.825 "seek_data": false, 00:27:49.825 "copy": false, 00:27:49.825 "nvme_iov_md": false 00:27:49.825 }, 00:27:49.825 "driver_specific": { 00:27:49.825 "raid": { 00:27:49.825 "uuid": "5bac3ba1-b88e-4644-acbe-5a975df44bf2", 00:27:49.825 "strip_size_kb": 64, 
00:27:49.825 "state": "online", 00:27:49.825 "raid_level": "raid5f", 00:27:49.825 "superblock": false, 00:27:49.825 "num_base_bdevs": 4, 00:27:49.825 "num_base_bdevs_discovered": 4, 00:27:49.825 "num_base_bdevs_operational": 4, 00:27:49.825 "base_bdevs_list": [ 00:27:49.825 { 00:27:49.825 "name": "BaseBdev1", 00:27:49.825 "uuid": "63a3aa43-10a6-4187-bf64-4df9d07f99cb", 00:27:49.825 "is_configured": true, 00:27:49.825 "data_offset": 0, 00:27:49.825 "data_size": 65536 00:27:49.825 }, 00:27:49.825 { 00:27:49.825 "name": "BaseBdev2", 00:27:49.825 "uuid": "3ba8de8b-c807-4524-864f-8f7da81fa673", 00:27:49.825 "is_configured": true, 00:27:49.825 "data_offset": 0, 00:27:49.825 "data_size": 65536 00:27:49.825 }, 00:27:49.825 { 00:27:49.825 "name": "BaseBdev3", 00:27:49.825 "uuid": "1177ffa0-0165-460f-80ea-288852e95338", 00:27:49.825 "is_configured": true, 00:27:49.825 "data_offset": 0, 00:27:49.825 "data_size": 65536 00:27:49.825 }, 00:27:49.825 { 00:27:49.825 "name": "BaseBdev4", 00:27:49.825 "uuid": "d04c5b6f-76a1-421f-895e-a33413ba4af2", 00:27:49.825 "is_configured": true, 00:27:49.825 "data_offset": 0, 00:27:49.825 "data_size": 65536 00:27:49.825 } 00:27:49.825 ] 00:27:49.825 } 00:27:49.825 } 00:27:49.825 }' 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:49.825 BaseBdev2 00:27:49.825 BaseBdev3 00:27:49.825 BaseBdev4' 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.825 07:48:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.825 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.826 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:27:50.085 [2024-10-07 07:48:49.465597] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.085 07:48:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.085 "name": "Existed_Raid", 00:27:50.085 "uuid": "5bac3ba1-b88e-4644-acbe-5a975df44bf2", 00:27:50.085 "strip_size_kb": 64, 00:27:50.085 "state": "online", 00:27:50.085 "raid_level": "raid5f", 00:27:50.085 "superblock": false, 00:27:50.085 "num_base_bdevs": 4, 00:27:50.085 "num_base_bdevs_discovered": 3, 00:27:50.085 "num_base_bdevs_operational": 3, 00:27:50.085 "base_bdevs_list": [ 00:27:50.085 { 00:27:50.085 "name": null, 00:27:50.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.085 "is_configured": false, 00:27:50.085 "data_offset": 0, 00:27:50.085 "data_size": 65536 00:27:50.085 }, 00:27:50.085 { 00:27:50.085 "name": "BaseBdev2", 00:27:50.085 "uuid": "3ba8de8b-c807-4524-864f-8f7da81fa673", 00:27:50.085 "is_configured": true, 00:27:50.085 "data_offset": 0, 00:27:50.085 "data_size": 65536 00:27:50.085 }, 00:27:50.085 { 00:27:50.085 "name": "BaseBdev3", 00:27:50.085 "uuid": "1177ffa0-0165-460f-80ea-288852e95338", 00:27:50.085 "is_configured": true, 00:27:50.085 "data_offset": 0, 00:27:50.085 "data_size": 65536 00:27:50.085 }, 00:27:50.085 { 00:27:50.085 "name": "BaseBdev4", 00:27:50.085 "uuid": "d04c5b6f-76a1-421f-895e-a33413ba4af2", 00:27:50.085 "is_configured": true, 00:27:50.085 "data_offset": 0, 00:27:50.085 "data_size": 65536 00:27:50.085 } 00:27:50.085 ] 00:27:50.085 }' 00:27:50.085 
07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.085 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.654 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:50.654 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:50.654 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.654 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.654 07:48:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:50.654 07:48:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.654 [2024-10-07 07:48:50.035227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:50.654 [2024-10-07 07:48:50.035476] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:50.654 [2024-10-07 07:48:50.133505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 
0 == 0 ]] 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.654 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.654 [2024-10-07 07:48:50.209554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:50.913 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.913 [2024-10-07 07:48:50.383505] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:50.913 [2024-10-07 07:48:50.383580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.172 07:48:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.172 BaseBdev2 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.172 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.173 [ 00:27:51.173 { 00:27:51.173 "name": "BaseBdev2", 00:27:51.173 "aliases": [ 00:27:51.173 "37ccf185-7068-4d93-9731-6b1af1f0f620" 00:27:51.173 ], 00:27:51.173 "product_name": "Malloc disk", 00:27:51.173 "block_size": 512, 00:27:51.173 "num_blocks": 65536, 00:27:51.173 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:51.173 "assigned_rate_limits": { 00:27:51.173 "rw_ios_per_sec": 0, 00:27:51.173 "rw_mbytes_per_sec": 0, 00:27:51.173 "r_mbytes_per_sec": 0, 00:27:51.173 "w_mbytes_per_sec": 0 00:27:51.173 }, 00:27:51.173 "claimed": false, 00:27:51.173 "zoned": false, 00:27:51.173 "supported_io_types": { 00:27:51.173 "read": true, 00:27:51.173 "write": true, 00:27:51.173 "unmap": true, 00:27:51.173 "flush": true, 00:27:51.173 "reset": true, 00:27:51.173 "nvme_admin": false, 00:27:51.173 "nvme_io": false, 00:27:51.173 "nvme_io_md": false, 00:27:51.173 "write_zeroes": true, 00:27:51.173 "zcopy": true, 00:27:51.173 "get_zone_info": false, 00:27:51.173 "zone_management": false, 00:27:51.173 "zone_append": false, 00:27:51.173 "compare": false, 00:27:51.173 "compare_and_write": false, 00:27:51.173 "abort": true, 00:27:51.173 "seek_hole": false, 00:27:51.173 "seek_data": false, 00:27:51.173 "copy": true, 00:27:51.173 "nvme_iov_md": false 00:27:51.173 }, 00:27:51.173 "memory_domains": [ 00:27:51.173 { 00:27:51.173 "dma_device_id": "system", 00:27:51.173 "dma_device_type": 1 00:27:51.173 }, 
00:27:51.173 { 00:27:51.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.173 "dma_device_type": 2 00:27:51.173 } 00:27:51.173 ], 00:27:51.173 "driver_specific": {} 00:27:51.173 } 00:27:51.173 ] 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.173 BaseBdev3 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.173 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.432 [ 00:27:51.432 { 00:27:51.432 "name": "BaseBdev3", 00:27:51.432 "aliases": [ 00:27:51.432 "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a" 00:27:51.432 ], 00:27:51.432 "product_name": "Malloc disk", 00:27:51.432 "block_size": 512, 00:27:51.432 "num_blocks": 65536, 00:27:51.432 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:51.432 "assigned_rate_limits": { 00:27:51.432 "rw_ios_per_sec": 0, 00:27:51.432 "rw_mbytes_per_sec": 0, 00:27:51.432 "r_mbytes_per_sec": 0, 00:27:51.432 "w_mbytes_per_sec": 0 00:27:51.432 }, 00:27:51.432 "claimed": false, 00:27:51.432 "zoned": false, 00:27:51.432 "supported_io_types": { 00:27:51.432 "read": true, 00:27:51.432 "write": true, 00:27:51.432 "unmap": true, 00:27:51.432 "flush": true, 00:27:51.432 "reset": true, 00:27:51.432 "nvme_admin": false, 00:27:51.432 "nvme_io": false, 00:27:51.432 "nvme_io_md": false, 00:27:51.432 "write_zeroes": true, 00:27:51.432 "zcopy": true, 00:27:51.432 "get_zone_info": false, 00:27:51.432 "zone_management": false, 00:27:51.432 "zone_append": false, 00:27:51.432 "compare": false, 00:27:51.432 "compare_and_write": false, 00:27:51.432 "abort": true, 00:27:51.432 "seek_hole": false, 00:27:51.432 "seek_data": false, 00:27:51.432 "copy": true, 00:27:51.432 "nvme_iov_md": false 00:27:51.432 }, 00:27:51.432 "memory_domains": [ 00:27:51.432 { 00:27:51.432 "dma_device_id": "system", 00:27:51.432 
"dma_device_type": 1 00:27:51.432 }, 00:27:51.432 { 00:27:51.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.432 "dma_device_type": 2 00:27:51.432 } 00:27:51.432 ], 00:27:51.432 "driver_specific": {} 00:27:51.432 } 00:27:51.432 ] 00:27:51.432 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.432 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:51.432 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:51.432 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.433 BaseBdev4 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:51.433 07:48:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.433 [ 00:27:51.433 { 00:27:51.433 "name": "BaseBdev4", 00:27:51.433 "aliases": [ 00:27:51.433 "7d758dcc-bb40-48fc-a240-99f97e5cefbc" 00:27:51.433 ], 00:27:51.433 "product_name": "Malloc disk", 00:27:51.433 "block_size": 512, 00:27:51.433 "num_blocks": 65536, 00:27:51.433 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:51.433 "assigned_rate_limits": { 00:27:51.433 "rw_ios_per_sec": 0, 00:27:51.433 "rw_mbytes_per_sec": 0, 00:27:51.433 "r_mbytes_per_sec": 0, 00:27:51.433 "w_mbytes_per_sec": 0 00:27:51.433 }, 00:27:51.433 "claimed": false, 00:27:51.433 "zoned": false, 00:27:51.433 "supported_io_types": { 00:27:51.433 "read": true, 00:27:51.433 "write": true, 00:27:51.433 "unmap": true, 00:27:51.433 "flush": true, 00:27:51.433 "reset": true, 00:27:51.433 "nvme_admin": false, 00:27:51.433 "nvme_io": false, 00:27:51.433 "nvme_io_md": false, 00:27:51.433 "write_zeroes": true, 00:27:51.433 "zcopy": true, 00:27:51.433 "get_zone_info": false, 00:27:51.433 "zone_management": false, 00:27:51.433 "zone_append": false, 00:27:51.433 "compare": false, 00:27:51.433 "compare_and_write": false, 00:27:51.433 "abort": true, 00:27:51.433 "seek_hole": false, 00:27:51.433 "seek_data": false, 00:27:51.433 "copy": true, 00:27:51.433 "nvme_iov_md": false 00:27:51.433 }, 00:27:51.433 "memory_domains": [ 00:27:51.433 { 00:27:51.433 
"dma_device_id": "system", 00:27:51.433 "dma_device_type": 1 00:27:51.433 }, 00:27:51.433 { 00:27:51.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.433 "dma_device_type": 2 00:27:51.433 } 00:27:51.433 ], 00:27:51.433 "driver_specific": {} 00:27:51.433 } 00:27:51.433 ] 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.433 [2024-10-07 07:48:50.833496] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:51.433 [2024-10-07 07:48:50.833671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:51.433 [2024-10-07 07:48:50.833741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:51.433 [2024-10-07 07:48:50.835984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:51.433 [2024-10-07 07:48:50.836039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.433 "name": "Existed_Raid", 00:27:51.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.433 "strip_size_kb": 64, 00:27:51.433 "state": "configuring", 00:27:51.433 "raid_level": "raid5f", 00:27:51.433 "superblock": false, 00:27:51.433 
"num_base_bdevs": 4, 00:27:51.433 "num_base_bdevs_discovered": 3, 00:27:51.433 "num_base_bdevs_operational": 4, 00:27:51.433 "base_bdevs_list": [ 00:27:51.433 { 00:27:51.433 "name": "BaseBdev1", 00:27:51.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.433 "is_configured": false, 00:27:51.433 "data_offset": 0, 00:27:51.433 "data_size": 0 00:27:51.433 }, 00:27:51.433 { 00:27:51.433 "name": "BaseBdev2", 00:27:51.433 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:51.433 "is_configured": true, 00:27:51.433 "data_offset": 0, 00:27:51.433 "data_size": 65536 00:27:51.433 }, 00:27:51.433 { 00:27:51.433 "name": "BaseBdev3", 00:27:51.433 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:51.433 "is_configured": true, 00:27:51.433 "data_offset": 0, 00:27:51.433 "data_size": 65536 00:27:51.433 }, 00:27:51.433 { 00:27:51.433 "name": "BaseBdev4", 00:27:51.433 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:51.433 "is_configured": true, 00:27:51.433 "data_offset": 0, 00:27:51.433 "data_size": 65536 00:27:51.433 } 00:27:51.433 ] 00:27:51.433 }' 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.433 07:48:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.692 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:51.692 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.692 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.951 [2024-10-07 07:48:51.257590] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:51.951 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.951 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.952 "name": "Existed_Raid", 00:27:51.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.952 "strip_size_kb": 64, 00:27:51.952 "state": "configuring", 00:27:51.952 "raid_level": "raid5f", 00:27:51.952 "superblock": false, 00:27:51.952 "num_base_bdevs": 4, 
00:27:51.952 "num_base_bdevs_discovered": 2, 00:27:51.952 "num_base_bdevs_operational": 4, 00:27:51.952 "base_bdevs_list": [ 00:27:51.952 { 00:27:51.952 "name": "BaseBdev1", 00:27:51.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.952 "is_configured": false, 00:27:51.952 "data_offset": 0, 00:27:51.952 "data_size": 0 00:27:51.952 }, 00:27:51.952 { 00:27:51.952 "name": null, 00:27:51.952 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:51.952 "is_configured": false, 00:27:51.952 "data_offset": 0, 00:27:51.952 "data_size": 65536 00:27:51.952 }, 00:27:51.952 { 00:27:51.952 "name": "BaseBdev3", 00:27:51.952 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:51.952 "is_configured": true, 00:27:51.952 "data_offset": 0, 00:27:51.952 "data_size": 65536 00:27:51.952 }, 00:27:51.952 { 00:27:51.952 "name": "BaseBdev4", 00:27:51.952 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:51.952 "is_configured": true, 00:27:51.952 "data_offset": 0, 00:27:51.952 "data_size": 65536 00:27:51.952 } 00:27:51.952 ] 00:27:51.952 }' 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.952 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:52.211 07:48:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.211 [2024-10-07 07:48:51.763970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:52.211 BaseBdev1 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.211 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.468 07:48:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.468 [ 00:27:52.468 { 00:27:52.468 "name": "BaseBdev1", 00:27:52.468 "aliases": [ 00:27:52.468 "8829d176-2e8c-435a-9a31-64856f9d81c3" 00:27:52.468 ], 00:27:52.468 "product_name": "Malloc disk", 00:27:52.468 "block_size": 512, 00:27:52.468 "num_blocks": 65536, 00:27:52.468 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:52.468 "assigned_rate_limits": { 00:27:52.468 "rw_ios_per_sec": 0, 00:27:52.468 "rw_mbytes_per_sec": 0, 00:27:52.468 "r_mbytes_per_sec": 0, 00:27:52.468 "w_mbytes_per_sec": 0 00:27:52.468 }, 00:27:52.468 "claimed": true, 00:27:52.468 "claim_type": "exclusive_write", 00:27:52.468 "zoned": false, 00:27:52.468 "supported_io_types": { 00:27:52.468 "read": true, 00:27:52.468 "write": true, 00:27:52.468 "unmap": true, 00:27:52.468 "flush": true, 00:27:52.468 "reset": true, 00:27:52.468 "nvme_admin": false, 00:27:52.468 "nvme_io": false, 00:27:52.468 "nvme_io_md": false, 00:27:52.468 "write_zeroes": true, 00:27:52.468 "zcopy": true, 00:27:52.468 "get_zone_info": false, 00:27:52.468 "zone_management": false, 00:27:52.468 "zone_append": false, 00:27:52.468 "compare": false, 00:27:52.468 "compare_and_write": false, 00:27:52.468 "abort": true, 00:27:52.468 "seek_hole": false, 00:27:52.468 "seek_data": false, 00:27:52.468 "copy": true, 00:27:52.468 "nvme_iov_md": false 00:27:52.468 }, 00:27:52.468 "memory_domains": [ 00:27:52.468 { 00:27:52.468 "dma_device_id": "system", 00:27:52.468 "dma_device_type": 1 00:27:52.468 }, 00:27:52.468 { 00:27:52.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.468 "dma_device_type": 2 00:27:52.468 } 00:27:52.468 ], 00:27:52.468 "driver_specific": {} 00:27:52.468 } 00:27:52.468 ] 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:52.468 07:48:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:52.468 "name": "Existed_Raid", 00:27:52.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.468 "strip_size_kb": 64, 00:27:52.468 "state": 
"configuring", 00:27:52.468 "raid_level": "raid5f", 00:27:52.468 "superblock": false, 00:27:52.468 "num_base_bdevs": 4, 00:27:52.468 "num_base_bdevs_discovered": 3, 00:27:52.468 "num_base_bdevs_operational": 4, 00:27:52.468 "base_bdevs_list": [ 00:27:52.468 { 00:27:52.468 "name": "BaseBdev1", 00:27:52.468 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:52.468 "is_configured": true, 00:27:52.468 "data_offset": 0, 00:27:52.468 "data_size": 65536 00:27:52.468 }, 00:27:52.468 { 00:27:52.468 "name": null, 00:27:52.468 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:52.468 "is_configured": false, 00:27:52.468 "data_offset": 0, 00:27:52.468 "data_size": 65536 00:27:52.468 }, 00:27:52.468 { 00:27:52.468 "name": "BaseBdev3", 00:27:52.468 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:52.468 "is_configured": true, 00:27:52.468 "data_offset": 0, 00:27:52.468 "data_size": 65536 00:27:52.468 }, 00:27:52.468 { 00:27:52.468 "name": "BaseBdev4", 00:27:52.468 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:52.468 "is_configured": true, 00:27:52.468 "data_offset": 0, 00:27:52.468 "data_size": 65536 00:27:52.468 } 00:27:52.468 ] 00:27:52.468 }' 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:52.468 07:48:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.726 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:52.726 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.726 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.726 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.726 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.985 07:48:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.985 [2024-10-07 07:48:52.296159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.985 07:48:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:52.985 "name": "Existed_Raid", 00:27:52.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.985 "strip_size_kb": 64, 00:27:52.985 "state": "configuring", 00:27:52.985 "raid_level": "raid5f", 00:27:52.985 "superblock": false, 00:27:52.985 "num_base_bdevs": 4, 00:27:52.985 "num_base_bdevs_discovered": 2, 00:27:52.985 "num_base_bdevs_operational": 4, 00:27:52.985 "base_bdevs_list": [ 00:27:52.985 { 00:27:52.985 "name": "BaseBdev1", 00:27:52.985 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:52.985 "is_configured": true, 00:27:52.985 "data_offset": 0, 00:27:52.985 "data_size": 65536 00:27:52.985 }, 00:27:52.985 { 00:27:52.985 "name": null, 00:27:52.985 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:52.985 "is_configured": false, 00:27:52.985 "data_offset": 0, 00:27:52.985 "data_size": 65536 00:27:52.985 }, 00:27:52.985 { 00:27:52.985 "name": null, 00:27:52.985 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:52.985 "is_configured": false, 00:27:52.985 "data_offset": 0, 00:27:52.985 "data_size": 65536 00:27:52.985 }, 00:27:52.985 { 00:27:52.985 "name": "BaseBdev4", 00:27:52.985 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:52.985 "is_configured": true, 00:27:52.985 "data_offset": 0, 00:27:52.985 "data_size": 65536 00:27:52.985 } 00:27:52.985 ] 00:27:52.985 }' 00:27:52.985 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:52.985 07:48:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 [2024-10-07 07:48:52.788327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:53.242 
07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.500 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:53.500 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.500 "name": "Existed_Raid", 00:27:53.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.500 "strip_size_kb": 64, 00:27:53.500 "state": "configuring", 00:27:53.500 "raid_level": "raid5f", 00:27:53.500 "superblock": false, 00:27:53.500 "num_base_bdevs": 4, 00:27:53.500 "num_base_bdevs_discovered": 3, 00:27:53.500 "num_base_bdevs_operational": 4, 00:27:53.500 "base_bdevs_list": [ 00:27:53.500 { 00:27:53.500 "name": "BaseBdev1", 00:27:53.500 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:53.500 "is_configured": true, 00:27:53.500 "data_offset": 0, 00:27:53.500 "data_size": 65536 00:27:53.500 }, 00:27:53.500 { 00:27:53.500 "name": null, 00:27:53.500 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:53.500 "is_configured": 
false, 00:27:53.500 "data_offset": 0, 00:27:53.500 "data_size": 65536 00:27:53.500 }, 00:27:53.500 { 00:27:53.500 "name": "BaseBdev3", 00:27:53.500 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:53.500 "is_configured": true, 00:27:53.500 "data_offset": 0, 00:27:53.500 "data_size": 65536 00:27:53.500 }, 00:27:53.500 { 00:27:53.500 "name": "BaseBdev4", 00:27:53.500 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:53.500 "is_configured": true, 00:27:53.500 "data_offset": 0, 00:27:53.500 "data_size": 65536 00:27:53.500 } 00:27:53.500 ] 00:27:53.500 }' 00:27:53.500 07:48:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.500 07:48:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:53.758 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.758 [2024-10-07 07:48:53.300412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.023 "name": "Existed_Raid", 00:27:54.023 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:54.023 "strip_size_kb": 64, 00:27:54.023 "state": "configuring", 00:27:54.023 "raid_level": "raid5f", 00:27:54.023 "superblock": false, 00:27:54.023 "num_base_bdevs": 4, 00:27:54.023 "num_base_bdevs_discovered": 2, 00:27:54.023 "num_base_bdevs_operational": 4, 00:27:54.023 "base_bdevs_list": [ 00:27:54.023 { 00:27:54.023 "name": null, 00:27:54.023 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:54.023 "is_configured": false, 00:27:54.023 "data_offset": 0, 00:27:54.023 "data_size": 65536 00:27:54.023 }, 00:27:54.023 { 00:27:54.023 "name": null, 00:27:54.023 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:54.023 "is_configured": false, 00:27:54.023 "data_offset": 0, 00:27:54.023 "data_size": 65536 00:27:54.023 }, 00:27:54.023 { 00:27:54.023 "name": "BaseBdev3", 00:27:54.023 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:54.023 "is_configured": true, 00:27:54.023 "data_offset": 0, 00:27:54.023 "data_size": 65536 00:27:54.023 }, 00:27:54.023 { 00:27:54.023 "name": "BaseBdev4", 00:27:54.023 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:54.023 "is_configured": true, 00:27:54.023 "data_offset": 0, 00:27:54.023 "data_size": 65536 00:27:54.023 } 00:27:54.023 ] 00:27:54.023 }' 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.023 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.308 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.308 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:54.308 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.308 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.568 [2024-10-07 07:48:53.899297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.568 07:48:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.569 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.569 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:54.569 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.569 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:54.569 07:48:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.569 "name": "Existed_Raid", 00:27:54.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.569 "strip_size_kb": 64, 00:27:54.569 "state": "configuring", 00:27:54.569 "raid_level": "raid5f", 00:27:54.569 "superblock": false, 00:27:54.569 "num_base_bdevs": 4, 00:27:54.569 "num_base_bdevs_discovered": 3, 00:27:54.569 "num_base_bdevs_operational": 4, 00:27:54.569 "base_bdevs_list": [ 00:27:54.569 { 00:27:54.569 "name": null, 00:27:54.569 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:54.569 "is_configured": false, 00:27:54.569 "data_offset": 0, 00:27:54.569 "data_size": 65536 00:27:54.569 }, 00:27:54.569 { 00:27:54.569 "name": "BaseBdev2", 00:27:54.569 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:54.569 "is_configured": true, 00:27:54.569 "data_offset": 0, 00:27:54.569 "data_size": 65536 00:27:54.569 }, 00:27:54.569 { 00:27:54.569 "name": "BaseBdev3", 00:27:54.569 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:54.569 "is_configured": true, 00:27:54.569 "data_offset": 0, 00:27:54.569 "data_size": 65536 00:27:54.569 }, 00:27:54.569 { 00:27:54.569 "name": "BaseBdev4", 00:27:54.569 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:54.569 "is_configured": true, 00:27:54.569 "data_offset": 0, 00:27:54.569 "data_size": 65536 00:27:54.569 } 00:27:54.569 ] 00:27:54.569 }' 00:27:54.569 07:48:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.569 07:48:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.828 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.828 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:54.828 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:54.828 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.088 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8829d176-2e8c-435a-9a31-64856f9d81c3 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.089 [2024-10-07 07:48:54.502260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:55.089 [2024-10-07 
07:48:54.502329] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:55.089 [2024-10-07 07:48:54.502339] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:55.089 [2024-10-07 07:48:54.502632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:55.089 [2024-10-07 07:48:54.511696] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:55.089 [2024-10-07 07:48:54.511748] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:55.089 NewBaseBdev 00:27:55.089 [2024-10-07 07:48:54.512057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local i 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.089 [ 00:27:55.089 { 00:27:55.089 "name": "NewBaseBdev", 00:27:55.089 "aliases": [ 00:27:55.089 "8829d176-2e8c-435a-9a31-64856f9d81c3" 00:27:55.089 ], 00:27:55.089 "product_name": "Malloc disk", 00:27:55.089 "block_size": 512, 00:27:55.089 "num_blocks": 65536, 00:27:55.089 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:55.089 "assigned_rate_limits": { 00:27:55.089 "rw_ios_per_sec": 0, 00:27:55.089 "rw_mbytes_per_sec": 0, 00:27:55.089 "r_mbytes_per_sec": 0, 00:27:55.089 "w_mbytes_per_sec": 0 00:27:55.089 }, 00:27:55.089 "claimed": true, 00:27:55.089 "claim_type": "exclusive_write", 00:27:55.089 "zoned": false, 00:27:55.089 "supported_io_types": { 00:27:55.089 "read": true, 00:27:55.089 "write": true, 00:27:55.089 "unmap": true, 00:27:55.089 "flush": true, 00:27:55.089 "reset": true, 00:27:55.089 "nvme_admin": false, 00:27:55.089 "nvme_io": false, 00:27:55.089 "nvme_io_md": false, 00:27:55.089 "write_zeroes": true, 00:27:55.089 "zcopy": true, 00:27:55.089 "get_zone_info": false, 00:27:55.089 "zone_management": false, 00:27:55.089 "zone_append": false, 00:27:55.089 "compare": false, 00:27:55.089 "compare_and_write": false, 00:27:55.089 "abort": true, 00:27:55.089 "seek_hole": false, 00:27:55.089 "seek_data": false, 00:27:55.089 "copy": true, 00:27:55.089 "nvme_iov_md": false 00:27:55.089 }, 00:27:55.089 "memory_domains": [ 00:27:55.089 { 00:27:55.089 "dma_device_id": "system", 00:27:55.089 "dma_device_type": 1 00:27:55.089 }, 00:27:55.089 { 00:27:55.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.089 "dma_device_type": 2 00:27:55.089 } 
00:27:55.089 ], 00:27:55.089 "driver_specific": {} 00:27:55.089 } 00:27:55.089 ] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # return 0 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:55.089 "name": "Existed_Raid", 00:27:55.089 "uuid": "e116b259-3a7b-4438-a773-450f1a8de526", 00:27:55.089 "strip_size_kb": 64, 00:27:55.089 "state": "online", 00:27:55.089 "raid_level": "raid5f", 00:27:55.089 "superblock": false, 00:27:55.089 "num_base_bdevs": 4, 00:27:55.089 "num_base_bdevs_discovered": 4, 00:27:55.089 "num_base_bdevs_operational": 4, 00:27:55.089 "base_bdevs_list": [ 00:27:55.089 { 00:27:55.089 "name": "NewBaseBdev", 00:27:55.089 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:55.089 "is_configured": true, 00:27:55.089 "data_offset": 0, 00:27:55.089 "data_size": 65536 00:27:55.089 }, 00:27:55.089 { 00:27:55.089 "name": "BaseBdev2", 00:27:55.089 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:55.089 "is_configured": true, 00:27:55.089 "data_offset": 0, 00:27:55.089 "data_size": 65536 00:27:55.089 }, 00:27:55.089 { 00:27:55.089 "name": "BaseBdev3", 00:27:55.089 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:55.089 "is_configured": true, 00:27:55.089 "data_offset": 0, 00:27:55.089 "data_size": 65536 00:27:55.089 }, 00:27:55.089 { 00:27:55.089 "name": "BaseBdev4", 00:27:55.089 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:55.089 "is_configured": true, 00:27:55.089 "data_offset": 0, 00:27:55.089 "data_size": 65536 00:27:55.089 } 00:27:55.089 ] 00:27:55.089 }' 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:55.089 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.659 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:55.659 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:55.660 07:48:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:55.660 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:55.660 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:55.660 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:55.660 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:55.660 07:48:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:55.660 07:48:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.660 [2024-10-07 07:48:55.006425] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:55.660 "name": "Existed_Raid", 00:27:55.660 "aliases": [ 00:27:55.660 "e116b259-3a7b-4438-a773-450f1a8de526" 00:27:55.660 ], 00:27:55.660 "product_name": "Raid Volume", 00:27:55.660 "block_size": 512, 00:27:55.660 "num_blocks": 196608, 00:27:55.660 "uuid": "e116b259-3a7b-4438-a773-450f1a8de526", 00:27:55.660 "assigned_rate_limits": { 00:27:55.660 "rw_ios_per_sec": 0, 00:27:55.660 "rw_mbytes_per_sec": 0, 00:27:55.660 "r_mbytes_per_sec": 0, 00:27:55.660 "w_mbytes_per_sec": 0 00:27:55.660 }, 00:27:55.660 "claimed": false, 00:27:55.660 "zoned": false, 00:27:55.660 "supported_io_types": { 00:27:55.660 "read": true, 00:27:55.660 "write": true, 00:27:55.660 "unmap": false, 00:27:55.660 "flush": false, 00:27:55.660 "reset": true, 00:27:55.660 "nvme_admin": false, 00:27:55.660 "nvme_io": false, 00:27:55.660 "nvme_io_md": 
false, 00:27:55.660 "write_zeroes": true, 00:27:55.660 "zcopy": false, 00:27:55.660 "get_zone_info": false, 00:27:55.660 "zone_management": false, 00:27:55.660 "zone_append": false, 00:27:55.660 "compare": false, 00:27:55.660 "compare_and_write": false, 00:27:55.660 "abort": false, 00:27:55.660 "seek_hole": false, 00:27:55.660 "seek_data": false, 00:27:55.660 "copy": false, 00:27:55.660 "nvme_iov_md": false 00:27:55.660 }, 00:27:55.660 "driver_specific": { 00:27:55.660 "raid": { 00:27:55.660 "uuid": "e116b259-3a7b-4438-a773-450f1a8de526", 00:27:55.660 "strip_size_kb": 64, 00:27:55.660 "state": "online", 00:27:55.660 "raid_level": "raid5f", 00:27:55.660 "superblock": false, 00:27:55.660 "num_base_bdevs": 4, 00:27:55.660 "num_base_bdevs_discovered": 4, 00:27:55.660 "num_base_bdevs_operational": 4, 00:27:55.660 "base_bdevs_list": [ 00:27:55.660 { 00:27:55.660 "name": "NewBaseBdev", 00:27:55.660 "uuid": "8829d176-2e8c-435a-9a31-64856f9d81c3", 00:27:55.660 "is_configured": true, 00:27:55.660 "data_offset": 0, 00:27:55.660 "data_size": 65536 00:27:55.660 }, 00:27:55.660 { 00:27:55.660 "name": "BaseBdev2", 00:27:55.660 "uuid": "37ccf185-7068-4d93-9731-6b1af1f0f620", 00:27:55.660 "is_configured": true, 00:27:55.660 "data_offset": 0, 00:27:55.660 "data_size": 65536 00:27:55.660 }, 00:27:55.660 { 00:27:55.660 "name": "BaseBdev3", 00:27:55.660 "uuid": "e7c75b5a-a33d-4db4-8c3a-2f28f79eec5a", 00:27:55.660 "is_configured": true, 00:27:55.660 "data_offset": 0, 00:27:55.660 "data_size": 65536 00:27:55.660 }, 00:27:55.660 { 00:27:55.660 "name": "BaseBdev4", 00:27:55.660 "uuid": "7d758dcc-bb40-48fc-a240-99f97e5cefbc", 00:27:55.660 "is_configured": true, 00:27:55.660 "data_offset": 0, 00:27:55.660 "data_size": 65536 00:27:55.660 } 00:27:55.660 ] 00:27:55.660 } 00:27:55.660 } 00:27:55.660 }' 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:55.660 07:48:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:55.660 BaseBdev2 00:27:55.660 BaseBdev3 00:27:55.660 BaseBdev4' 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:55.660 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.920 07:48:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.920 [2024-10-07 07:48:55.354254] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:55.920 [2024-10-07 07:48:55.354389] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:55.920 [2024-10-07 07:48:55.354632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:55.920 [2024-10-07 07:48:55.355027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:55.920 [2024-10-07 07:48:55.355051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83022 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' -z 83022 ']' 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # kill -0 83022 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # uname 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 
00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 83022 00:27:55.920 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:27:55.921 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:27:55.921 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 83022' 00:27:55.921 killing process with pid 83022 00:27:55.921 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # kill 83022 00:27:55.921 [2024-10-07 07:48:55.400780] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:55.921 07:48:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@977 -- # wait 83022 00:27:56.489 [2024-10-07 07:48:55.824701] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:57.869 00:27:57.869 real 0m11.987s 00:27:57.869 user 0m18.826s 00:27:57.869 sys 0m2.235s 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.869 ************************************ 00:27:57.869 END TEST raid5f_state_function_test 00:27:57.869 ************************************ 00:27:57.869 07:48:57 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:27:57.869 07:48:57 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:27:57.869 07:48:57 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:27:57.869 07:48:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:57.869 ************************************ 00:27:57.869 START TEST 
raid5f_state_function_test_sb 00:27:57.869 ************************************ 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # raid_state_function_test raid5f 4 true 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:57.869 
07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:57.869 Process raid pid: 83694 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83694 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83694' 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:57.869 07:48:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83694 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # '[' -z 83694 ']' 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:27:57.869 07:48:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:57.869 [2024-10-07 07:48:57.335462] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:27:57.869 [2024-10-07 07:48:57.335855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.129 [2024-10-07 07:48:57.509392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.388 [2024-10-07 07:48:57.804414] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.647 [2024-10-07 07:48:58.023811] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:58.647 [2024-10-07 07:48:58.023984] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@867 -- # return 0 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:58.647 [2024-10-07 07:48:58.178283] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:58.647 [2024-10-07 07:48:58.178347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:58.647 [2024-10-07 07:48:58.178359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:58.647 [2024-10-07 07:48:58.178372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:58.647 [2024-10-07 07:48:58.178381] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:27:58.647 [2024-10-07 07:48:58.178393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:58.647 [2024-10-07 07:48:58.178400] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:58.647 [2024-10-07 07:48:58.178413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 
00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:58.647 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:58.907 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:58.907 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.907 "name": "Existed_Raid", 00:27:58.907 "uuid": "4fbdd4f8-c60a-4313-b09b-b63d58815772", 00:27:58.907 "strip_size_kb": 64, 00:27:58.907 "state": "configuring", 00:27:58.907 "raid_level": "raid5f", 00:27:58.907 "superblock": true, 00:27:58.907 "num_base_bdevs": 4, 00:27:58.907 "num_base_bdevs_discovered": 0, 00:27:58.907 "num_base_bdevs_operational": 4, 00:27:58.907 "base_bdevs_list": [ 00:27:58.907 { 00:27:58.907 "name": "BaseBdev1", 00:27:58.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.907 "is_configured": false, 00:27:58.907 "data_offset": 0, 00:27:58.907 "data_size": 0 00:27:58.907 }, 00:27:58.907 { 00:27:58.907 "name": "BaseBdev2", 00:27:58.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.907 "is_configured": false, 00:27:58.907 "data_offset": 0, 00:27:58.907 "data_size": 0 00:27:58.907 }, 00:27:58.907 { 00:27:58.907 "name": "BaseBdev3", 00:27:58.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.907 "is_configured": false, 00:27:58.907 "data_offset": 0, 00:27:58.907 "data_size": 0 00:27:58.907 }, 00:27:58.907 { 00:27:58.907 "name": "BaseBdev4", 00:27:58.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.907 "is_configured": false, 00:27:58.907 "data_offset": 0, 00:27:58.907 "data_size": 0 00:27:58.907 } 00:27:58.907 ] 00:27:58.907 }' 00:27:58.907 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.907 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.166 [2024-10-07 07:48:58.526236] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:59.166 [2024-10-07 07:48:58.526410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.166 [2024-10-07 07:48:58.534268] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:59.166 [2024-10-07 07:48:58.534432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:59.166 [2024-10-07 07:48:58.534518] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:59.166 [2024-10-07 07:48:58.534542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:59.166 [2024-10-07 07:48:58.534551] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:59.166 [2024-10-07 07:48:58.534564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:59.166 [2024-10-07 07:48:58.534571] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:59.166 [2024-10-07 07:48:58.534584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.166 [2024-10-07 07:48:58.594800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:59.166 BaseBdev1 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.166 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.166 [ 00:27:59.166 { 00:27:59.166 "name": "BaseBdev1", 00:27:59.166 "aliases": [ 00:27:59.166 "00baeedf-9911-441f-b74b-361da171a15a" 00:27:59.166 ], 00:27:59.166 "product_name": "Malloc disk", 00:27:59.166 "block_size": 512, 00:27:59.166 "num_blocks": 65536, 00:27:59.166 "uuid": "00baeedf-9911-441f-b74b-361da171a15a", 00:27:59.166 "assigned_rate_limits": { 00:27:59.166 "rw_ios_per_sec": 0, 00:27:59.166 "rw_mbytes_per_sec": 0, 00:27:59.166 "r_mbytes_per_sec": 0, 00:27:59.166 "w_mbytes_per_sec": 0 00:27:59.166 }, 00:27:59.166 "claimed": true, 00:27:59.166 "claim_type": "exclusive_write", 00:27:59.166 "zoned": false, 00:27:59.166 "supported_io_types": { 00:27:59.166 "read": true, 00:27:59.166 "write": true, 00:27:59.166 "unmap": true, 00:27:59.166 "flush": true, 00:27:59.166 "reset": true, 00:27:59.166 "nvme_admin": false, 00:27:59.166 "nvme_io": false, 00:27:59.167 "nvme_io_md": false, 00:27:59.167 "write_zeroes": true, 00:27:59.167 "zcopy": true, 00:27:59.167 "get_zone_info": false, 00:27:59.167 "zone_management": false, 00:27:59.167 "zone_append": false, 00:27:59.167 "compare": false, 00:27:59.167 "compare_and_write": false, 00:27:59.167 "abort": true, 00:27:59.167 "seek_hole": false, 00:27:59.167 "seek_data": false, 00:27:59.167 "copy": true, 00:27:59.167 "nvme_iov_md": false 00:27:59.167 }, 00:27:59.167 "memory_domains": [ 00:27:59.167 { 00:27:59.167 "dma_device_id": "system", 00:27:59.167 "dma_device_type": 1 00:27:59.167 }, 00:27:59.167 { 00:27:59.167 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:59.167 "dma_device_type": 2 00:27:59.167 } 00:27:59.167 ], 00:27:59.167 "driver_specific": {} 00:27:59.167 } 00:27:59.167 ] 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.167 07:48:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:59.167 "name": "Existed_Raid", 00:27:59.167 "uuid": "70fa4644-60f3-46d5-b182-2baff6c6d360", 00:27:59.167 "strip_size_kb": 64, 00:27:59.167 "state": "configuring", 00:27:59.167 "raid_level": "raid5f", 00:27:59.167 "superblock": true, 00:27:59.167 "num_base_bdevs": 4, 00:27:59.167 "num_base_bdevs_discovered": 1, 00:27:59.167 "num_base_bdevs_operational": 4, 00:27:59.167 "base_bdevs_list": [ 00:27:59.167 { 00:27:59.167 "name": "BaseBdev1", 00:27:59.167 "uuid": "00baeedf-9911-441f-b74b-361da171a15a", 00:27:59.167 "is_configured": true, 00:27:59.167 "data_offset": 2048, 00:27:59.167 "data_size": 63488 00:27:59.167 }, 00:27:59.167 { 00:27:59.167 "name": "BaseBdev2", 00:27:59.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.167 "is_configured": false, 00:27:59.167 "data_offset": 0, 00:27:59.167 "data_size": 0 00:27:59.167 }, 00:27:59.167 { 00:27:59.167 "name": "BaseBdev3", 00:27:59.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.167 "is_configured": false, 00:27:59.167 "data_offset": 0, 00:27:59.167 "data_size": 0 00:27:59.167 }, 00:27:59.167 { 00:27:59.167 "name": "BaseBdev4", 00:27:59.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.167 "is_configured": false, 00:27:59.167 "data_offset": 0, 00:27:59.167 "data_size": 0 00:27:59.167 } 00:27:59.167 ] 00:27:59.167 }' 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:59.167 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.734 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:59.734 07:48:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.734 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.734 [2024-10-07 07:48:58.990961] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:59.734 [2024-10-07 07:48:58.991177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:59.734 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.734 07:48:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:59.734 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.734 07:48:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.734 [2024-10-07 07:48:58.999006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:59.734 [2024-10-07 07:48:59.001157] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:59.734 [2024-10-07 07:48:59.001201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:59.734 [2024-10-07 07:48:59.001213] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:59.734 [2024-10-07 07:48:59.001228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:59.734 [2024-10-07 07:48:59.001236] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:59.734 [2024-10-07 07:48:59.001248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.734 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.735 07:48:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.735 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:59.735 "name": "Existed_Raid", 00:27:59.735 "uuid": "d7bec155-98a0-4975-882f-1b24eb9c6386", 00:27:59.735 "strip_size_kb": 64, 00:27:59.735 "state": "configuring", 00:27:59.735 "raid_level": "raid5f", 00:27:59.735 "superblock": true, 00:27:59.735 "num_base_bdevs": 4, 00:27:59.735 "num_base_bdevs_discovered": 1, 00:27:59.735 "num_base_bdevs_operational": 4, 00:27:59.735 "base_bdevs_list": [ 00:27:59.735 { 00:27:59.735 "name": "BaseBdev1", 00:27:59.735 "uuid": "00baeedf-9911-441f-b74b-361da171a15a", 00:27:59.735 "is_configured": true, 00:27:59.735 "data_offset": 2048, 00:27:59.735 "data_size": 63488 00:27:59.735 }, 00:27:59.735 { 00:27:59.735 "name": "BaseBdev2", 00:27:59.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.735 "is_configured": false, 00:27:59.735 "data_offset": 0, 00:27:59.735 "data_size": 0 00:27:59.735 }, 00:27:59.735 { 00:27:59.735 "name": "BaseBdev3", 00:27:59.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.735 "is_configured": false, 00:27:59.735 "data_offset": 0, 00:27:59.735 "data_size": 0 00:27:59.735 }, 00:27:59.735 { 00:27:59.735 "name": "BaseBdev4", 00:27:59.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.735 "is_configured": false, 00:27:59.735 "data_offset": 0, 00:27:59.735 "data_size": 0 00:27:59.735 } 00:27:59.735 ] 00:27:59.735 }' 00:27:59.735 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:59.735 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 
00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.994 [2024-10-07 07:48:59.453393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:59.994 BaseBdev2 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.994 [ 00:27:59.994 { 00:27:59.994 "name": "BaseBdev2", 00:27:59.994 "aliases": [ 00:27:59.994 
"c7e06237-d95a-4bdd-894d-232eee818a9f" 00:27:59.994 ], 00:27:59.994 "product_name": "Malloc disk", 00:27:59.994 "block_size": 512, 00:27:59.994 "num_blocks": 65536, 00:27:59.994 "uuid": "c7e06237-d95a-4bdd-894d-232eee818a9f", 00:27:59.994 "assigned_rate_limits": { 00:27:59.994 "rw_ios_per_sec": 0, 00:27:59.994 "rw_mbytes_per_sec": 0, 00:27:59.994 "r_mbytes_per_sec": 0, 00:27:59.994 "w_mbytes_per_sec": 0 00:27:59.994 }, 00:27:59.994 "claimed": true, 00:27:59.994 "claim_type": "exclusive_write", 00:27:59.994 "zoned": false, 00:27:59.994 "supported_io_types": { 00:27:59.994 "read": true, 00:27:59.994 "write": true, 00:27:59.994 "unmap": true, 00:27:59.994 "flush": true, 00:27:59.994 "reset": true, 00:27:59.994 "nvme_admin": false, 00:27:59.994 "nvme_io": false, 00:27:59.994 "nvme_io_md": false, 00:27:59.994 "write_zeroes": true, 00:27:59.994 "zcopy": true, 00:27:59.994 "get_zone_info": false, 00:27:59.994 "zone_management": false, 00:27:59.994 "zone_append": false, 00:27:59.994 "compare": false, 00:27:59.994 "compare_and_write": false, 00:27:59.994 "abort": true, 00:27:59.994 "seek_hole": false, 00:27:59.994 "seek_data": false, 00:27:59.994 "copy": true, 00:27:59.994 "nvme_iov_md": false 00:27:59.994 }, 00:27:59.994 "memory_domains": [ 00:27:59.994 { 00:27:59.994 "dma_device_id": "system", 00:27:59.994 "dma_device_type": 1 00:27:59.994 }, 00:27:59.994 { 00:27:59.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.994 "dma_device_type": 2 00:27:59.994 } 00:27:59.994 ], 00:27:59.994 "driver_specific": {} 00:27:59.994 } 00:27:59.994 ] 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:27:59.994 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:59.994 "name": "Existed_Raid", 00:27:59.994 "uuid": 
"d7bec155-98a0-4975-882f-1b24eb9c6386", 00:27:59.994 "strip_size_kb": 64, 00:27:59.994 "state": "configuring", 00:27:59.994 "raid_level": "raid5f", 00:27:59.994 "superblock": true, 00:27:59.994 "num_base_bdevs": 4, 00:27:59.994 "num_base_bdevs_discovered": 2, 00:27:59.994 "num_base_bdevs_operational": 4, 00:27:59.994 "base_bdevs_list": [ 00:27:59.994 { 00:27:59.994 "name": "BaseBdev1", 00:27:59.994 "uuid": "00baeedf-9911-441f-b74b-361da171a15a", 00:27:59.994 "is_configured": true, 00:27:59.994 "data_offset": 2048, 00:27:59.994 "data_size": 63488 00:27:59.994 }, 00:27:59.994 { 00:27:59.994 "name": "BaseBdev2", 00:27:59.994 "uuid": "c7e06237-d95a-4bdd-894d-232eee818a9f", 00:27:59.994 "is_configured": true, 00:27:59.994 "data_offset": 2048, 00:27:59.994 "data_size": 63488 00:27:59.994 }, 00:27:59.994 { 00:27:59.994 "name": "BaseBdev3", 00:27:59.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.994 "is_configured": false, 00:27:59.994 "data_offset": 0, 00:27:59.994 "data_size": 0 00:27:59.994 }, 00:27:59.995 { 00:27:59.995 "name": "BaseBdev4", 00:27:59.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.995 "is_configured": false, 00:27:59.995 "data_offset": 0, 00:27:59.995 "data_size": 0 00:27:59.995 } 00:27:59.995 ] 00:27:59.995 }' 00:27:59.995 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:59.995 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.562 [2024-10-07 07:48:59.978431] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:00.562 BaseBdev3 
00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:00.562 07:48:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.562 [ 00:28:00.562 { 00:28:00.562 "name": "BaseBdev3", 00:28:00.562 "aliases": [ 00:28:00.562 "5f5fa542-1f91-4427-ae53-935665367b5d" 00:28:00.562 ], 00:28:00.562 "product_name": "Malloc disk", 00:28:00.562 "block_size": 512, 00:28:00.562 "num_blocks": 65536, 00:28:00.562 "uuid": "5f5fa542-1f91-4427-ae53-935665367b5d", 00:28:00.562 
"assigned_rate_limits": { 00:28:00.562 "rw_ios_per_sec": 0, 00:28:00.562 "rw_mbytes_per_sec": 0, 00:28:00.562 "r_mbytes_per_sec": 0, 00:28:00.562 "w_mbytes_per_sec": 0 00:28:00.562 }, 00:28:00.562 "claimed": true, 00:28:00.562 "claim_type": "exclusive_write", 00:28:00.562 "zoned": false, 00:28:00.562 "supported_io_types": { 00:28:00.562 "read": true, 00:28:00.562 "write": true, 00:28:00.562 "unmap": true, 00:28:00.562 "flush": true, 00:28:00.562 "reset": true, 00:28:00.562 "nvme_admin": false, 00:28:00.562 "nvme_io": false, 00:28:00.562 "nvme_io_md": false, 00:28:00.562 "write_zeroes": true, 00:28:00.562 "zcopy": true, 00:28:00.562 "get_zone_info": false, 00:28:00.562 "zone_management": false, 00:28:00.562 "zone_append": false, 00:28:00.562 "compare": false, 00:28:00.562 "compare_and_write": false, 00:28:00.562 "abort": true, 00:28:00.563 "seek_hole": false, 00:28:00.563 "seek_data": false, 00:28:00.563 "copy": true, 00:28:00.563 "nvme_iov_md": false 00:28:00.563 }, 00:28:00.563 "memory_domains": [ 00:28:00.563 { 00:28:00.563 "dma_device_id": "system", 00:28:00.563 "dma_device_type": 1 00:28:00.563 }, 00:28:00.563 { 00:28:00.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:00.563 "dma_device_type": 2 00:28:00.563 } 00:28:00.563 ], 00:28:00.563 "driver_specific": {} 00:28:00.563 } 00:28:00.563 ] 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:00.563 "name": "Existed_Raid", 00:28:00.563 "uuid": "d7bec155-98a0-4975-882f-1b24eb9c6386", 00:28:00.563 "strip_size_kb": 64, 00:28:00.563 "state": "configuring", 00:28:00.563 "raid_level": "raid5f", 00:28:00.563 "superblock": true, 00:28:00.563 "num_base_bdevs": 4, 00:28:00.563 "num_base_bdevs_discovered": 3, 
00:28:00.563 "num_base_bdevs_operational": 4, 00:28:00.563 "base_bdevs_list": [ 00:28:00.563 { 00:28:00.563 "name": "BaseBdev1", 00:28:00.563 "uuid": "00baeedf-9911-441f-b74b-361da171a15a", 00:28:00.563 "is_configured": true, 00:28:00.563 "data_offset": 2048, 00:28:00.563 "data_size": 63488 00:28:00.563 }, 00:28:00.563 { 00:28:00.563 "name": "BaseBdev2", 00:28:00.563 "uuid": "c7e06237-d95a-4bdd-894d-232eee818a9f", 00:28:00.563 "is_configured": true, 00:28:00.563 "data_offset": 2048, 00:28:00.563 "data_size": 63488 00:28:00.563 }, 00:28:00.563 { 00:28:00.563 "name": "BaseBdev3", 00:28:00.563 "uuid": "5f5fa542-1f91-4427-ae53-935665367b5d", 00:28:00.563 "is_configured": true, 00:28:00.563 "data_offset": 2048, 00:28:00.563 "data_size": 63488 00:28:00.563 }, 00:28:00.563 { 00:28:00.563 "name": "BaseBdev4", 00:28:00.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.563 "is_configured": false, 00:28:00.563 "data_offset": 0, 00:28:00.563 "data_size": 0 00:28:00.563 } 00:28:00.563 ] 00:28:00.563 }' 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:00.563 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.134 [2024-10-07 07:49:00.464084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:01.134 [2024-10-07 07:49:00.464352] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:01.134 [2024-10-07 07:49:00.464373] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:01.134 BaseBdev4 
00:28:01.134 [2024-10-07 07:49:00.464678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.134 [2024-10-07 07:49:00.472482] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:01.134 [2024-10-07 07:49:00.472509] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:01.134 [2024-10-07 07:49:00.472830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:01.134 07:49:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.134 [ 00:28:01.134 { 00:28:01.134 "name": "BaseBdev4", 00:28:01.134 "aliases": [ 00:28:01.134 "aef83694-37ab-40d0-8e77-d1007329af55" 00:28:01.134 ], 00:28:01.134 "product_name": "Malloc disk", 00:28:01.134 "block_size": 512, 00:28:01.134 "num_blocks": 65536, 00:28:01.134 "uuid": "aef83694-37ab-40d0-8e77-d1007329af55", 00:28:01.134 "assigned_rate_limits": { 00:28:01.134 "rw_ios_per_sec": 0, 00:28:01.134 "rw_mbytes_per_sec": 0, 00:28:01.134 "r_mbytes_per_sec": 0, 00:28:01.134 "w_mbytes_per_sec": 0 00:28:01.134 }, 00:28:01.134 "claimed": true, 00:28:01.134 "claim_type": "exclusive_write", 00:28:01.134 "zoned": false, 00:28:01.134 "supported_io_types": { 00:28:01.134 "read": true, 00:28:01.134 "write": true, 00:28:01.134 "unmap": true, 00:28:01.134 "flush": true, 00:28:01.134 "reset": true, 00:28:01.134 "nvme_admin": false, 00:28:01.134 "nvme_io": false, 00:28:01.134 "nvme_io_md": false, 00:28:01.134 "write_zeroes": true, 00:28:01.134 "zcopy": true, 00:28:01.134 "get_zone_info": false, 00:28:01.134 "zone_management": false, 00:28:01.134 "zone_append": false, 00:28:01.134 "compare": false, 00:28:01.134 "compare_and_write": false, 00:28:01.134 "abort": true, 00:28:01.134 "seek_hole": false, 00:28:01.134 "seek_data": false, 00:28:01.134 "copy": true, 00:28:01.134 "nvme_iov_md": false 00:28:01.134 }, 00:28:01.134 "memory_domains": [ 00:28:01.134 { 00:28:01.134 "dma_device_id": "system", 00:28:01.134 "dma_device_type": 1 00:28:01.134 }, 00:28:01.134 { 00:28:01.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.134 "dma_device_type": 2 00:28:01.134 } 00:28:01.134 ], 00:28:01.134 "driver_specific": {} 00:28:01.134 } 00:28:01.134 ] 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.134 07:49:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.134 "name": "Existed_Raid", 00:28:01.134 "uuid": "d7bec155-98a0-4975-882f-1b24eb9c6386", 00:28:01.134 "strip_size_kb": 64, 00:28:01.134 "state": "online", 00:28:01.134 "raid_level": "raid5f", 00:28:01.134 "superblock": true, 00:28:01.134 "num_base_bdevs": 4, 00:28:01.134 "num_base_bdevs_discovered": 4, 00:28:01.134 "num_base_bdevs_operational": 4, 00:28:01.134 "base_bdevs_list": [ 00:28:01.134 { 00:28:01.134 "name": "BaseBdev1", 00:28:01.134 "uuid": "00baeedf-9911-441f-b74b-361da171a15a", 00:28:01.134 "is_configured": true, 00:28:01.134 "data_offset": 2048, 00:28:01.134 "data_size": 63488 00:28:01.134 }, 00:28:01.134 { 00:28:01.134 "name": "BaseBdev2", 00:28:01.134 "uuid": "c7e06237-d95a-4bdd-894d-232eee818a9f", 00:28:01.134 "is_configured": true, 00:28:01.134 "data_offset": 2048, 00:28:01.134 "data_size": 63488 00:28:01.134 }, 00:28:01.134 { 00:28:01.134 "name": "BaseBdev3", 00:28:01.134 "uuid": "5f5fa542-1f91-4427-ae53-935665367b5d", 00:28:01.134 "is_configured": true, 00:28:01.134 "data_offset": 2048, 00:28:01.134 "data_size": 63488 00:28:01.134 }, 00:28:01.134 { 00:28:01.134 "name": "BaseBdev4", 00:28:01.134 "uuid": "aef83694-37ab-40d0-8e77-d1007329af55", 00:28:01.134 "is_configured": true, 00:28:01.134 "data_offset": 2048, 00:28:01.134 "data_size": 63488 00:28:01.134 } 00:28:01.134 ] 00:28:01.134 }' 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.134 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.394 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:01.394 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:28:01.654 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:01.654 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.655 [2024-10-07 07:49:00.965742] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:01.655 "name": "Existed_Raid", 00:28:01.655 "aliases": [ 00:28:01.655 "d7bec155-98a0-4975-882f-1b24eb9c6386" 00:28:01.655 ], 00:28:01.655 "product_name": "Raid Volume", 00:28:01.655 "block_size": 512, 00:28:01.655 "num_blocks": 190464, 00:28:01.655 "uuid": "d7bec155-98a0-4975-882f-1b24eb9c6386", 00:28:01.655 "assigned_rate_limits": { 00:28:01.655 "rw_ios_per_sec": 0, 00:28:01.655 "rw_mbytes_per_sec": 0, 00:28:01.655 "r_mbytes_per_sec": 0, 00:28:01.655 "w_mbytes_per_sec": 0 00:28:01.655 }, 00:28:01.655 "claimed": false, 00:28:01.655 "zoned": false, 00:28:01.655 "supported_io_types": { 00:28:01.655 "read": true, 00:28:01.655 "write": true, 00:28:01.655 "unmap": false, 00:28:01.655 "flush": false, 
00:28:01.655 "reset": true, 00:28:01.655 "nvme_admin": false, 00:28:01.655 "nvme_io": false, 00:28:01.655 "nvme_io_md": false, 00:28:01.655 "write_zeroes": true, 00:28:01.655 "zcopy": false, 00:28:01.655 "get_zone_info": false, 00:28:01.655 "zone_management": false, 00:28:01.655 "zone_append": false, 00:28:01.655 "compare": false, 00:28:01.655 "compare_and_write": false, 00:28:01.655 "abort": false, 00:28:01.655 "seek_hole": false, 00:28:01.655 "seek_data": false, 00:28:01.655 "copy": false, 00:28:01.655 "nvme_iov_md": false 00:28:01.655 }, 00:28:01.655 "driver_specific": { 00:28:01.655 "raid": { 00:28:01.655 "uuid": "d7bec155-98a0-4975-882f-1b24eb9c6386", 00:28:01.655 "strip_size_kb": 64, 00:28:01.655 "state": "online", 00:28:01.655 "raid_level": "raid5f", 00:28:01.655 "superblock": true, 00:28:01.655 "num_base_bdevs": 4, 00:28:01.655 "num_base_bdevs_discovered": 4, 00:28:01.655 "num_base_bdevs_operational": 4, 00:28:01.655 "base_bdevs_list": [ 00:28:01.655 { 00:28:01.655 "name": "BaseBdev1", 00:28:01.655 "uuid": "00baeedf-9911-441f-b74b-361da171a15a", 00:28:01.655 "is_configured": true, 00:28:01.655 "data_offset": 2048, 00:28:01.655 "data_size": 63488 00:28:01.655 }, 00:28:01.655 { 00:28:01.655 "name": "BaseBdev2", 00:28:01.655 "uuid": "c7e06237-d95a-4bdd-894d-232eee818a9f", 00:28:01.655 "is_configured": true, 00:28:01.655 "data_offset": 2048, 00:28:01.655 "data_size": 63488 00:28:01.655 }, 00:28:01.655 { 00:28:01.655 "name": "BaseBdev3", 00:28:01.655 "uuid": "5f5fa542-1f91-4427-ae53-935665367b5d", 00:28:01.655 "is_configured": true, 00:28:01.655 "data_offset": 2048, 00:28:01.655 "data_size": 63488 00:28:01.655 }, 00:28:01.655 { 00:28:01.655 "name": "BaseBdev4", 00:28:01.655 "uuid": "aef83694-37ab-40d0-8e77-d1007329af55", 00:28:01.655 "is_configured": true, 00:28:01.655 "data_offset": 2048, 00:28:01.655 "data_size": 63488 00:28:01.655 } 00:28:01.655 ] 00:28:01.655 } 00:28:01.655 } 00:28:01.655 }' 00:28:01.655 07:49:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:01.655 BaseBdev2 00:28:01.655 BaseBdev3 00:28:01.655 BaseBdev4' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:01.655 07:49:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.655 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:01.916 07:49:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.916 [2024-10-07 07:49:01.277637] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.916 "name": "Existed_Raid", 00:28:01.916 "uuid": "d7bec155-98a0-4975-882f-1b24eb9c6386", 00:28:01.916 "strip_size_kb": 64, 00:28:01.916 "state": "online", 00:28:01.916 "raid_level": "raid5f", 00:28:01.916 "superblock": true, 00:28:01.916 "num_base_bdevs": 4, 00:28:01.916 "num_base_bdevs_discovered": 3, 00:28:01.916 "num_base_bdevs_operational": 3, 00:28:01.916 "base_bdevs_list": [ 00:28:01.916 { 00:28:01.916 "name": 
null, 00:28:01.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.916 "is_configured": false, 00:28:01.916 "data_offset": 0, 00:28:01.916 "data_size": 63488 00:28:01.916 }, 00:28:01.916 { 00:28:01.916 "name": "BaseBdev2", 00:28:01.916 "uuid": "c7e06237-d95a-4bdd-894d-232eee818a9f", 00:28:01.916 "is_configured": true, 00:28:01.916 "data_offset": 2048, 00:28:01.916 "data_size": 63488 00:28:01.916 }, 00:28:01.916 { 00:28:01.916 "name": "BaseBdev3", 00:28:01.916 "uuid": "5f5fa542-1f91-4427-ae53-935665367b5d", 00:28:01.916 "is_configured": true, 00:28:01.916 "data_offset": 2048, 00:28:01.916 "data_size": 63488 00:28:01.916 }, 00:28:01.916 { 00:28:01.916 "name": "BaseBdev4", 00:28:01.916 "uuid": "aef83694-37ab-40d0-8e77-d1007329af55", 00:28:01.916 "is_configured": true, 00:28:01.916 "data_offset": 2048, 00:28:01.916 "data_size": 63488 00:28:01.916 } 00:28:01.916 ] 00:28:01.916 }' 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.916 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:02.484 07:49:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.484 [2024-10-07 07:49:01.894669] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:02.484 [2024-10-07 07:49:01.895023] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:02.484 [2024-10-07 07:49:02.004803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.484 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.744 [2024-10-07 07:49:02.060867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:02.744 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.744 [2024-10-07 
07:49:02.227127] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:02.744 [2024-10-07 07:49:02.227315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.004 07:49:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.004 BaseBdev2 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.004 [ 00:28:03.004 { 00:28:03.004 "name": "BaseBdev2", 00:28:03.004 "aliases": [ 00:28:03.004 "c1091770-24b1-4e94-a0d9-efba64e68fe5" 00:28:03.004 ], 00:28:03.004 "product_name": "Malloc disk", 00:28:03.004 "block_size": 512, 00:28:03.004 
"num_blocks": 65536, 00:28:03.004 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:03.004 "assigned_rate_limits": { 00:28:03.004 "rw_ios_per_sec": 0, 00:28:03.004 "rw_mbytes_per_sec": 0, 00:28:03.004 "r_mbytes_per_sec": 0, 00:28:03.004 "w_mbytes_per_sec": 0 00:28:03.004 }, 00:28:03.004 "claimed": false, 00:28:03.004 "zoned": false, 00:28:03.004 "supported_io_types": { 00:28:03.004 "read": true, 00:28:03.004 "write": true, 00:28:03.004 "unmap": true, 00:28:03.004 "flush": true, 00:28:03.004 "reset": true, 00:28:03.004 "nvme_admin": false, 00:28:03.004 "nvme_io": false, 00:28:03.004 "nvme_io_md": false, 00:28:03.004 "write_zeroes": true, 00:28:03.004 "zcopy": true, 00:28:03.004 "get_zone_info": false, 00:28:03.004 "zone_management": false, 00:28:03.004 "zone_append": false, 00:28:03.004 "compare": false, 00:28:03.004 "compare_and_write": false, 00:28:03.004 "abort": true, 00:28:03.004 "seek_hole": false, 00:28:03.004 "seek_data": false, 00:28:03.004 "copy": true, 00:28:03.004 "nvme_iov_md": false 00:28:03.004 }, 00:28:03.004 "memory_domains": [ 00:28:03.004 { 00:28:03.004 "dma_device_id": "system", 00:28:03.004 "dma_device_type": 1 00:28:03.004 }, 00:28:03.004 { 00:28:03.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.004 "dma_device_type": 2 00:28:03.004 } 00:28:03.004 ], 00:28:03.004 "driver_specific": {} 00:28:03.004 } 00:28:03.004 ] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:03.004 07:49:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.004 BaseBdev3 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev3 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.004 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.005 [ 00:28:03.005 { 00:28:03.005 "name": "BaseBdev3", 00:28:03.005 "aliases": [ 00:28:03.005 
"a86620b0-eff5-4694-8810-0c02f9021607" 00:28:03.005 ], 00:28:03.005 "product_name": "Malloc disk", 00:28:03.005 "block_size": 512, 00:28:03.005 "num_blocks": 65536, 00:28:03.005 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:03.005 "assigned_rate_limits": { 00:28:03.005 "rw_ios_per_sec": 0, 00:28:03.005 "rw_mbytes_per_sec": 0, 00:28:03.005 "r_mbytes_per_sec": 0, 00:28:03.005 "w_mbytes_per_sec": 0 00:28:03.005 }, 00:28:03.005 "claimed": false, 00:28:03.005 "zoned": false, 00:28:03.005 "supported_io_types": { 00:28:03.005 "read": true, 00:28:03.005 "write": true, 00:28:03.005 "unmap": true, 00:28:03.005 "flush": true, 00:28:03.005 "reset": true, 00:28:03.005 "nvme_admin": false, 00:28:03.005 "nvme_io": false, 00:28:03.005 "nvme_io_md": false, 00:28:03.005 "write_zeroes": true, 00:28:03.005 "zcopy": true, 00:28:03.005 "get_zone_info": false, 00:28:03.005 "zone_management": false, 00:28:03.005 "zone_append": false, 00:28:03.005 "compare": false, 00:28:03.005 "compare_and_write": false, 00:28:03.005 "abort": true, 00:28:03.005 "seek_hole": false, 00:28:03.005 "seek_data": false, 00:28:03.005 "copy": true, 00:28:03.005 "nvme_iov_md": false 00:28:03.005 }, 00:28:03.005 "memory_domains": [ 00:28:03.005 { 00:28:03.005 "dma_device_id": "system", 00:28:03.005 "dma_device_type": 1 00:28:03.005 }, 00:28:03.005 { 00:28:03.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.005 "dma_device_type": 2 00:28:03.005 } 00:28:03.005 ], 00:28:03.005 "driver_specific": {} 00:28:03.005 } 00:28:03.005 ] 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:03.005 07:49:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.005 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.264 BaseBdev4 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev4 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.264 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:28:03.264 [ 00:28:03.264 { 00:28:03.264 "name": "BaseBdev4", 00:28:03.264 "aliases": [ 00:28:03.264 "c73bd6d8-0a85-43c4-8bab-185e4fd4087f" 00:28:03.264 ], 00:28:03.264 "product_name": "Malloc disk", 00:28:03.264 "block_size": 512, 00:28:03.264 "num_blocks": 65536, 00:28:03.264 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:03.264 "assigned_rate_limits": { 00:28:03.264 "rw_ios_per_sec": 0, 00:28:03.264 "rw_mbytes_per_sec": 0, 00:28:03.264 "r_mbytes_per_sec": 0, 00:28:03.265 "w_mbytes_per_sec": 0 00:28:03.265 }, 00:28:03.265 "claimed": false, 00:28:03.265 "zoned": false, 00:28:03.265 "supported_io_types": { 00:28:03.265 "read": true, 00:28:03.265 "write": true, 00:28:03.265 "unmap": true, 00:28:03.265 "flush": true, 00:28:03.265 "reset": true, 00:28:03.265 "nvme_admin": false, 00:28:03.265 "nvme_io": false, 00:28:03.265 "nvme_io_md": false, 00:28:03.265 "write_zeroes": true, 00:28:03.265 "zcopy": true, 00:28:03.265 "get_zone_info": false, 00:28:03.265 "zone_management": false, 00:28:03.265 "zone_append": false, 00:28:03.265 "compare": false, 00:28:03.265 "compare_and_write": false, 00:28:03.265 "abort": true, 00:28:03.265 "seek_hole": false, 00:28:03.265 "seek_data": false, 00:28:03.265 "copy": true, 00:28:03.265 "nvme_iov_md": false 00:28:03.265 }, 00:28:03.265 "memory_domains": [ 00:28:03.265 { 00:28:03.265 "dma_device_id": "system", 00:28:03.265 "dma_device_type": 1 00:28:03.265 }, 00:28:03.265 { 00:28:03.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.265 "dma_device_type": 2 00:28:03.265 } 00:28:03.265 ], 00:28:03.265 "driver_specific": {} 00:28:03.265 } 00:28:03.265 ] 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:03.265 07:49:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.265 [2024-10-07 07:49:02.635019] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:03.265 [2024-10-07 07:49:02.635063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:03.265 [2024-10-07 07:49:02.635086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:03.265 [2024-10-07 07:49:02.637321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:03.265 [2024-10-07 07:49:02.637373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.265 "name": "Existed_Raid", 00:28:03.265 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:03.265 "strip_size_kb": 64, 00:28:03.265 "state": "configuring", 00:28:03.265 "raid_level": "raid5f", 00:28:03.265 "superblock": true, 00:28:03.265 "num_base_bdevs": 4, 00:28:03.265 "num_base_bdevs_discovered": 3, 00:28:03.265 "num_base_bdevs_operational": 4, 00:28:03.265 "base_bdevs_list": [ 00:28:03.265 { 00:28:03.265 "name": "BaseBdev1", 00:28:03.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.265 "is_configured": false, 00:28:03.265 "data_offset": 0, 00:28:03.265 "data_size": 0 00:28:03.265 }, 00:28:03.265 { 00:28:03.265 "name": "BaseBdev2", 00:28:03.265 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:03.265 "is_configured": true, 00:28:03.265 "data_offset": 2048, 00:28:03.265 
"data_size": 63488 00:28:03.265 }, 00:28:03.265 { 00:28:03.265 "name": "BaseBdev3", 00:28:03.265 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:03.265 "is_configured": true, 00:28:03.265 "data_offset": 2048, 00:28:03.265 "data_size": 63488 00:28:03.265 }, 00:28:03.265 { 00:28:03.265 "name": "BaseBdev4", 00:28:03.265 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:03.265 "is_configured": true, 00:28:03.265 "data_offset": 2048, 00:28:03.265 "data_size": 63488 00:28:03.265 } 00:28:03.265 ] 00:28:03.265 }' 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.265 07:49:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.834 [2024-10-07 07:49:03.099137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:03.834 07:49:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.834 "name": "Existed_Raid", 00:28:03.834 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:03.834 "strip_size_kb": 64, 00:28:03.834 "state": "configuring", 00:28:03.834 "raid_level": "raid5f", 00:28:03.834 "superblock": true, 00:28:03.834 "num_base_bdevs": 4, 00:28:03.834 "num_base_bdevs_discovered": 2, 00:28:03.834 "num_base_bdevs_operational": 4, 00:28:03.834 "base_bdevs_list": [ 00:28:03.834 { 00:28:03.834 "name": "BaseBdev1", 00:28:03.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.834 "is_configured": false, 00:28:03.834 "data_offset": 0, 00:28:03.834 "data_size": 0 00:28:03.834 }, 00:28:03.834 { 00:28:03.834 "name": null, 00:28:03.834 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:03.834 
"is_configured": false, 00:28:03.834 "data_offset": 0, 00:28:03.834 "data_size": 63488 00:28:03.834 }, 00:28:03.834 { 00:28:03.834 "name": "BaseBdev3", 00:28:03.834 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:03.834 "is_configured": true, 00:28:03.834 "data_offset": 2048, 00:28:03.834 "data_size": 63488 00:28:03.834 }, 00:28:03.834 { 00:28:03.834 "name": "BaseBdev4", 00:28:03.834 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:03.834 "is_configured": true, 00:28:03.834 "data_offset": 2048, 00:28:03.834 "data_size": 63488 00:28:03.834 } 00:28:03.834 ] 00:28:03.834 }' 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.834 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:04.093 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.094 [2024-10-07 07:49:03.646943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:28:04.094 BaseBdev1 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:04.094 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.353 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.353 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:04.353 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:04.353 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.353 [ 00:28:04.353 { 00:28:04.353 "name": "BaseBdev1", 00:28:04.353 "aliases": [ 00:28:04.353 "002aec7b-da3b-46fb-a44d-59496c5835de" 00:28:04.353 ], 00:28:04.353 "product_name": "Malloc disk", 00:28:04.353 "block_size": 512, 00:28:04.353 "num_blocks": 65536, 00:28:04.353 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 
00:28:04.353 "assigned_rate_limits": { 00:28:04.353 "rw_ios_per_sec": 0, 00:28:04.353 "rw_mbytes_per_sec": 0, 00:28:04.354 "r_mbytes_per_sec": 0, 00:28:04.354 "w_mbytes_per_sec": 0 00:28:04.354 }, 00:28:04.354 "claimed": true, 00:28:04.354 "claim_type": "exclusive_write", 00:28:04.354 "zoned": false, 00:28:04.354 "supported_io_types": { 00:28:04.354 "read": true, 00:28:04.354 "write": true, 00:28:04.354 "unmap": true, 00:28:04.354 "flush": true, 00:28:04.354 "reset": true, 00:28:04.354 "nvme_admin": false, 00:28:04.354 "nvme_io": false, 00:28:04.354 "nvme_io_md": false, 00:28:04.354 "write_zeroes": true, 00:28:04.354 "zcopy": true, 00:28:04.354 "get_zone_info": false, 00:28:04.354 "zone_management": false, 00:28:04.354 "zone_append": false, 00:28:04.354 "compare": false, 00:28:04.354 "compare_and_write": false, 00:28:04.354 "abort": true, 00:28:04.354 "seek_hole": false, 00:28:04.354 "seek_data": false, 00:28:04.354 "copy": true, 00:28:04.354 "nvme_iov_md": false 00:28:04.354 }, 00:28:04.354 "memory_domains": [ 00:28:04.354 { 00:28:04.354 "dma_device_id": "system", 00:28:04.354 "dma_device_type": 1 00:28:04.354 }, 00:28:04.354 { 00:28:04.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.354 "dma_device_type": 2 00:28:04.354 } 00:28:04.354 ], 00:28:04.354 "driver_specific": {} 00:28:04.354 } 00:28:04.354 ] 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:04.354 07:49:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.354 "name": "Existed_Raid", 00:28:04.354 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:04.354 "strip_size_kb": 64, 00:28:04.354 "state": "configuring", 00:28:04.354 "raid_level": "raid5f", 00:28:04.354 "superblock": true, 00:28:04.354 "num_base_bdevs": 4, 00:28:04.354 "num_base_bdevs_discovered": 3, 00:28:04.354 "num_base_bdevs_operational": 4, 00:28:04.354 "base_bdevs_list": [ 00:28:04.354 { 00:28:04.354 "name": "BaseBdev1", 00:28:04.354 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 
00:28:04.354 "is_configured": true, 00:28:04.354 "data_offset": 2048, 00:28:04.354 "data_size": 63488 00:28:04.354 }, 00:28:04.354 { 00:28:04.354 "name": null, 00:28:04.354 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:04.354 "is_configured": false, 00:28:04.354 "data_offset": 0, 00:28:04.354 "data_size": 63488 00:28:04.354 }, 00:28:04.354 { 00:28:04.354 "name": "BaseBdev3", 00:28:04.354 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:04.354 "is_configured": true, 00:28:04.354 "data_offset": 2048, 00:28:04.354 "data_size": 63488 00:28:04.354 }, 00:28:04.354 { 00:28:04.354 "name": "BaseBdev4", 00:28:04.354 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:04.354 "is_configured": true, 00:28:04.354 "data_offset": 2048, 00:28:04.354 "data_size": 63488 00:28:04.354 } 00:28:04.354 ] 00:28:04.354 }' 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:04.354 07:49:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.613 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:04.613 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.613 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:04.613 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.614 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.873 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:04.873 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:04.873 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 
00:28:04.873 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.873 [2024-10-07 07:49:04.183201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.874 "name": "Existed_Raid", 00:28:04.874 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:04.874 "strip_size_kb": 64, 00:28:04.874 "state": "configuring", 00:28:04.874 "raid_level": "raid5f", 00:28:04.874 "superblock": true, 00:28:04.874 "num_base_bdevs": 4, 00:28:04.874 "num_base_bdevs_discovered": 2, 00:28:04.874 "num_base_bdevs_operational": 4, 00:28:04.874 "base_bdevs_list": [ 00:28:04.874 { 00:28:04.874 "name": "BaseBdev1", 00:28:04.874 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 00:28:04.874 "is_configured": true, 00:28:04.874 "data_offset": 2048, 00:28:04.874 "data_size": 63488 00:28:04.874 }, 00:28:04.874 { 00:28:04.874 "name": null, 00:28:04.874 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:04.874 "is_configured": false, 00:28:04.874 "data_offset": 0, 00:28:04.874 "data_size": 63488 00:28:04.874 }, 00:28:04.874 { 00:28:04.874 "name": null, 00:28:04.874 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:04.874 "is_configured": false, 00:28:04.874 "data_offset": 0, 00:28:04.874 "data_size": 63488 00:28:04.874 }, 00:28:04.874 { 00:28:04.874 "name": "BaseBdev4", 00:28:04.874 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:04.874 "is_configured": true, 00:28:04.874 "data_offset": 2048, 00:28:04.874 "data_size": 63488 00:28:04.874 } 00:28:04.874 ] 00:28:04.874 }' 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:04.874 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.133 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.133 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- 
# xtrace_disable 00:28:05.133 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.133 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:05.133 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:05.393 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:05.393 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:05.393 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:05.393 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.393 [2024-10-07 07:49:04.707317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:05.393 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.394 07:49:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.394 "name": "Existed_Raid", 00:28:05.394 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:05.394 "strip_size_kb": 64, 00:28:05.394 "state": "configuring", 00:28:05.394 "raid_level": "raid5f", 00:28:05.394 "superblock": true, 00:28:05.394 "num_base_bdevs": 4, 00:28:05.394 "num_base_bdevs_discovered": 3, 00:28:05.394 "num_base_bdevs_operational": 4, 00:28:05.394 "base_bdevs_list": [ 00:28:05.394 { 00:28:05.394 "name": "BaseBdev1", 00:28:05.394 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 00:28:05.394 "is_configured": true, 00:28:05.394 "data_offset": 2048, 00:28:05.394 "data_size": 63488 00:28:05.394 }, 00:28:05.394 { 00:28:05.394 "name": null, 00:28:05.394 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:05.394 "is_configured": false, 00:28:05.394 "data_offset": 0, 00:28:05.394 "data_size": 63488 00:28:05.394 }, 00:28:05.394 { 00:28:05.394 "name": "BaseBdev3", 00:28:05.394 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:05.394 
"is_configured": true, 00:28:05.394 "data_offset": 2048, 00:28:05.394 "data_size": 63488 00:28:05.394 }, 00:28:05.394 { 00:28:05.394 "name": "BaseBdev4", 00:28:05.394 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:05.394 "is_configured": true, 00:28:05.394 "data_offset": 2048, 00:28:05.394 "data_size": 63488 00:28:05.394 } 00:28:05.394 ] 00:28:05.394 }' 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.394 07:49:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.654 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.654 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:05.654 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.654 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:05.654 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.913 [2024-10-07 07:49:05.231485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:05.913 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.913 "name": "Existed_Raid", 00:28:05.913 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:05.914 "strip_size_kb": 64, 00:28:05.914 "state": "configuring", 00:28:05.914 "raid_level": "raid5f", 00:28:05.914 
"superblock": true, 00:28:05.914 "num_base_bdevs": 4, 00:28:05.914 "num_base_bdevs_discovered": 2, 00:28:05.914 "num_base_bdevs_operational": 4, 00:28:05.914 "base_bdevs_list": [ 00:28:05.914 { 00:28:05.914 "name": null, 00:28:05.914 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 00:28:05.914 "is_configured": false, 00:28:05.914 "data_offset": 0, 00:28:05.914 "data_size": 63488 00:28:05.914 }, 00:28:05.914 { 00:28:05.914 "name": null, 00:28:05.914 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:05.914 "is_configured": false, 00:28:05.914 "data_offset": 0, 00:28:05.914 "data_size": 63488 00:28:05.914 }, 00:28:05.914 { 00:28:05.914 "name": "BaseBdev3", 00:28:05.914 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:05.914 "is_configured": true, 00:28:05.914 "data_offset": 2048, 00:28:05.914 "data_size": 63488 00:28:05.914 }, 00:28:05.914 { 00:28:05.914 "name": "BaseBdev4", 00:28:05.914 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:05.914 "is_configured": true, 00:28:05.914 "data_offset": 2048, 00:28:05.914 "data_size": 63488 00:28:05.914 } 00:28:05.914 ] 00:28:05.914 }' 00:28:05.914 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.914 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.483 [2024-10-07 07:49:05.851074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.483 07:49:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.483 "name": "Existed_Raid", 00:28:06.483 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:06.483 "strip_size_kb": 64, 00:28:06.483 "state": "configuring", 00:28:06.483 "raid_level": "raid5f", 00:28:06.483 "superblock": true, 00:28:06.483 "num_base_bdevs": 4, 00:28:06.483 "num_base_bdevs_discovered": 3, 00:28:06.483 "num_base_bdevs_operational": 4, 00:28:06.483 "base_bdevs_list": [ 00:28:06.483 { 00:28:06.483 "name": null, 00:28:06.483 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 00:28:06.483 "is_configured": false, 00:28:06.483 "data_offset": 0, 00:28:06.483 "data_size": 63488 00:28:06.483 }, 00:28:06.483 { 00:28:06.483 "name": "BaseBdev2", 00:28:06.483 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:06.483 "is_configured": true, 00:28:06.483 "data_offset": 2048, 00:28:06.483 "data_size": 63488 00:28:06.483 }, 00:28:06.483 { 00:28:06.483 "name": "BaseBdev3", 00:28:06.483 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:06.483 "is_configured": true, 00:28:06.483 "data_offset": 2048, 00:28:06.483 "data_size": 63488 00:28:06.483 }, 00:28:06.483 { 00:28:06.483 "name": "BaseBdev4", 00:28:06.483 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:06.483 "is_configured": true, 00:28:06.483 "data_offset": 2048, 00:28:06.483 "data_size": 63488 00:28:06.483 } 00:28:06.483 ] 00:28:06.483 }' 00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:28:06.483 07:49:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.742 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.742 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:06.742 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.742 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 002aec7b-da3b-46fb-a44d-59496c5835de 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.003 [2024-10-07 07:49:06.432174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:07.003 [2024-10-07 07:49:06.432813] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:07.003 [2024-10-07 07:49:06.432844] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:07.003 NewBaseBdev 00:28:07.003 [2024-10-07 07:49:06.433236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_name=NewBaseBdev 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local i 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.003 [2024-10-07 07:49:06.444451] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:07.003 [2024-10-07 07:49:06.444515] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:07.003 [2024-10-07 07:49:06.444946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.003 [ 00:28:07.003 { 00:28:07.003 "name": "NewBaseBdev", 00:28:07.003 "aliases": [ 00:28:07.003 "002aec7b-da3b-46fb-a44d-59496c5835de" 00:28:07.003 ], 00:28:07.003 "product_name": "Malloc disk", 00:28:07.003 "block_size": 512, 00:28:07.003 "num_blocks": 65536, 00:28:07.003 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 00:28:07.003 "assigned_rate_limits": { 00:28:07.003 "rw_ios_per_sec": 0, 00:28:07.003 "rw_mbytes_per_sec": 0, 00:28:07.003 "r_mbytes_per_sec": 0, 00:28:07.003 "w_mbytes_per_sec": 0 00:28:07.003 }, 00:28:07.003 "claimed": true, 00:28:07.003 "claim_type": "exclusive_write", 00:28:07.003 "zoned": false, 00:28:07.003 "supported_io_types": { 00:28:07.003 "read": true, 00:28:07.003 "write": true, 00:28:07.003 "unmap": true, 00:28:07.003 "flush": true, 00:28:07.003 "reset": true, 00:28:07.003 "nvme_admin": false, 00:28:07.003 "nvme_io": false, 00:28:07.003 "nvme_io_md": false, 00:28:07.003 "write_zeroes": true, 00:28:07.003 "zcopy": true, 00:28:07.003 "get_zone_info": false, 00:28:07.003 "zone_management": false, 00:28:07.003 "zone_append": false, 00:28:07.003 "compare": false, 00:28:07.003 "compare_and_write": false, 00:28:07.003 "abort": true, 00:28:07.003 "seek_hole": false, 00:28:07.003 "seek_data": false, 00:28:07.003 "copy": true, 00:28:07.003 "nvme_iov_md": false 00:28:07.003 }, 00:28:07.003 "memory_domains": [ 00:28:07.003 { 00:28:07.003 "dma_device_id": "system", 00:28:07.003 "dma_device_type": 1 00:28:07.003 }, 00:28:07.003 { 00:28:07.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.003 "dma_device_type": 2 00:28:07.003 } 
00:28:07.003 ], 00:28:07.003 "driver_specific": {} 00:28:07.003 } 00:28:07.003 ] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # return 0 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.003 
07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.003 "name": "Existed_Raid", 00:28:07.003 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:07.003 "strip_size_kb": 64, 00:28:07.003 "state": "online", 00:28:07.003 "raid_level": "raid5f", 00:28:07.003 "superblock": true, 00:28:07.003 "num_base_bdevs": 4, 00:28:07.003 "num_base_bdevs_discovered": 4, 00:28:07.003 "num_base_bdevs_operational": 4, 00:28:07.003 "base_bdevs_list": [ 00:28:07.003 { 00:28:07.003 "name": "NewBaseBdev", 00:28:07.003 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 00:28:07.003 "is_configured": true, 00:28:07.003 "data_offset": 2048, 00:28:07.003 "data_size": 63488 00:28:07.003 }, 00:28:07.003 { 00:28:07.003 "name": "BaseBdev2", 00:28:07.003 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:07.003 "is_configured": true, 00:28:07.003 "data_offset": 2048, 00:28:07.003 "data_size": 63488 00:28:07.003 }, 00:28:07.003 { 00:28:07.003 "name": "BaseBdev3", 00:28:07.003 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:07.003 "is_configured": true, 00:28:07.003 "data_offset": 2048, 00:28:07.003 "data_size": 63488 00:28:07.003 }, 00:28:07.003 { 00:28:07.003 "name": "BaseBdev4", 00:28:07.003 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:07.003 "is_configured": true, 00:28:07.003 "data_offset": 2048, 00:28:07.003 "data_size": 63488 00:28:07.003 } 00:28:07.003 ] 00:28:07.003 }' 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.003 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.572 [2024-10-07 07:49:06.950067] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.572 07:49:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:07.572 "name": "Existed_Raid", 00:28:07.572 "aliases": [ 00:28:07.572 "450d57bc-385b-4ed2-9c25-bedc59b3f35a" 00:28:07.572 ], 00:28:07.572 "product_name": "Raid Volume", 00:28:07.572 "block_size": 512, 00:28:07.572 "num_blocks": 190464, 00:28:07.572 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:07.572 "assigned_rate_limits": { 00:28:07.572 "rw_ios_per_sec": 0, 00:28:07.572 "rw_mbytes_per_sec": 0, 00:28:07.572 "r_mbytes_per_sec": 0, 00:28:07.572 "w_mbytes_per_sec": 0 00:28:07.572 }, 00:28:07.572 "claimed": false, 00:28:07.572 "zoned": false, 00:28:07.572 "supported_io_types": { 00:28:07.572 "read": true, 00:28:07.572 "write": true, 00:28:07.572 "unmap": false, 00:28:07.572 "flush": false, 
00:28:07.572 "reset": true, 00:28:07.572 "nvme_admin": false, 00:28:07.572 "nvme_io": false, 00:28:07.572 "nvme_io_md": false, 00:28:07.572 "write_zeroes": true, 00:28:07.572 "zcopy": false, 00:28:07.572 "get_zone_info": false, 00:28:07.572 "zone_management": false, 00:28:07.572 "zone_append": false, 00:28:07.572 "compare": false, 00:28:07.572 "compare_and_write": false, 00:28:07.572 "abort": false, 00:28:07.572 "seek_hole": false, 00:28:07.572 "seek_data": false, 00:28:07.572 "copy": false, 00:28:07.572 "nvme_iov_md": false 00:28:07.572 }, 00:28:07.572 "driver_specific": { 00:28:07.572 "raid": { 00:28:07.572 "uuid": "450d57bc-385b-4ed2-9c25-bedc59b3f35a", 00:28:07.572 "strip_size_kb": 64, 00:28:07.572 "state": "online", 00:28:07.572 "raid_level": "raid5f", 00:28:07.572 "superblock": true, 00:28:07.572 "num_base_bdevs": 4, 00:28:07.572 "num_base_bdevs_discovered": 4, 00:28:07.572 "num_base_bdevs_operational": 4, 00:28:07.572 "base_bdevs_list": [ 00:28:07.572 { 00:28:07.572 "name": "NewBaseBdev", 00:28:07.572 "uuid": "002aec7b-da3b-46fb-a44d-59496c5835de", 00:28:07.572 "is_configured": true, 00:28:07.572 "data_offset": 2048, 00:28:07.572 "data_size": 63488 00:28:07.572 }, 00:28:07.572 { 00:28:07.572 "name": "BaseBdev2", 00:28:07.572 "uuid": "c1091770-24b1-4e94-a0d9-efba64e68fe5", 00:28:07.572 "is_configured": true, 00:28:07.572 "data_offset": 2048, 00:28:07.572 "data_size": 63488 00:28:07.572 }, 00:28:07.572 { 00:28:07.572 "name": "BaseBdev3", 00:28:07.572 "uuid": "a86620b0-eff5-4694-8810-0c02f9021607", 00:28:07.572 "is_configured": true, 00:28:07.572 "data_offset": 2048, 00:28:07.572 "data_size": 63488 00:28:07.572 }, 00:28:07.572 { 00:28:07.572 "name": "BaseBdev4", 00:28:07.572 "uuid": "c73bd6d8-0a85-43c4-8bab-185e4fd4087f", 00:28:07.572 "is_configured": true, 00:28:07.572 "data_offset": 2048, 00:28:07.572 "data_size": 63488 00:28:07.572 } 00:28:07.572 ] 00:28:07.572 } 00:28:07.573 } 00:28:07.573 }' 00:28:07.573 07:49:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:07.573 BaseBdev2 00:28:07.573 BaseBdev3 00:28:07.573 BaseBdev4' 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:07.573 
07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.573 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.832 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:07.833 07:49:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.833 [2024-10-07 07:49:07.265931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:07.833 [2024-10-07 07:49:07.266110] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:07.833 [2024-10-07 07:49:07.266222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:07.833 [2024-10-07 07:49:07.266535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:07.833 [2024-10-07 07:49:07.266550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83694 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' -z 83694 ']' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@957 -- # kill -0 83694 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # uname 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 83694 00:28:07.833 killing process with pid 83694 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 83694' 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # kill 83694 00:28:07.833 [2024-10-07 07:49:07.309461] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:07.833 07:49:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@977 -- # wait 83694 00:28:08.402 [2024-10-07 07:49:07.745241] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:09.795 07:49:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:28:09.795 00:28:09.795 real 0m11.857s 00:28:09.795 user 0m18.567s 00:28:09.795 sys 0m2.262s 00:28:09.795 07:49:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:28:09.795 07:49:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.795 ************************************ 00:28:09.795 END TEST raid5f_state_function_test_sb 00:28:09.795 ************************************ 00:28:09.795 07:49:09 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:28:09.795 07:49:09 bdev_raid -- 
common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:28:09.795 07:49:09 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:28:09.795 07:49:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:09.795 ************************************ 00:28:09.795 START TEST raid5f_superblock_test 00:28:09.795 ************************************ 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # raid_superblock_test raid5f 4 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84365 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84365 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # '[' -z 84365 ']' 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:28:09.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:28:09.795 07:49:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.795 [2024-10-07 07:49:09.283230] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:28:09.795 [2024-10-07 07:49:09.283413] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84365 ] 00:28:10.055 [2024-10-07 07:49:09.474341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.315 [2024-10-07 07:49:09.781381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.575 [2024-10-07 07:49:10.007022] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:10.575 [2024-10-07 07:49:10.007064] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@867 -- # return 0 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.834 malloc1 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.834 [2024-10-07 07:49:10.236605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:10.834 [2024-10-07 07:49:10.236690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.834 [2024-10-07 07:49:10.236745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:10.834 [2024-10-07 07:49:10.236769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.834 [2024-10-07 07:49:10.239405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.834 [2024-10-07 07:49:10.239452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:10.834 pt1 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:10.834 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.835 malloc2 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.835 [2024-10-07 07:49:10.303392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:10.835 [2024-10-07 07:49:10.303583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.835 [2024-10-07 07:49:10.303618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:10.835 [2024-10-07 07:49:10.303630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.835 [2024-10-07 07:49:10.306095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.835 [2024-10-07 07:49:10.306137] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:10.835 pt2 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.835 malloc3 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.835 [2024-10-07 07:49:10.363113] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:10.835 [2024-10-07 07:49:10.363175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.835 [2024-10-07 07:49:10.363199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:10.835 [2024-10-07 07:49:10.363212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.835 [2024-10-07 07:49:10.365604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.835 [2024-10-07 07:49:10.365807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:10.835 pt3 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:28:10.835 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:10.835 07:49:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.095 malloc4 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.095 [2024-10-07 07:49:10.422599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:11.095 [2024-10-07 07:49:10.422667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.095 [2024-10-07 07:49:10.422690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:11.095 [2024-10-07 07:49:10.422703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.095 [2024-10-07 07:49:10.425228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.095 [2024-10-07 07:49:10.425271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:11.095 pt4 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:11.095 [2024-10-07 07:49:10.434685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:11.095 [2024-10-07 07:49:10.437194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:11.095 [2024-10-07 07:49:10.437267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:11.095 [2024-10-07 07:49:10.437338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:11.095 [2024-10-07 07:49:10.437573] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:11.095 [2024-10-07 07:49:10.437592] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:11.095 [2024-10-07 07:49:10.437904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:11.095 [2024-10-07 07:49:10.446345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:11.095 [2024-10-07 07:49:10.446501] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:11.095 [2024-10-07 07:49:10.446761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:11.095 
07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:11.095 "name": "raid_bdev1", 00:28:11.095 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:11.095 "strip_size_kb": 64, 00:28:11.095 "state": "online", 00:28:11.095 "raid_level": "raid5f", 00:28:11.095 "superblock": true, 00:28:11.095 "num_base_bdevs": 4, 00:28:11.095 "num_base_bdevs_discovered": 4, 00:28:11.095 "num_base_bdevs_operational": 4, 00:28:11.095 "base_bdevs_list": [ 00:28:11.095 { 00:28:11.095 "name": "pt1", 00:28:11.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:11.095 "is_configured": true, 00:28:11.095 "data_offset": 2048, 00:28:11.095 "data_size": 63488 00:28:11.095 }, 00:28:11.095 { 00:28:11.095 "name": "pt2", 00:28:11.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:11.095 "is_configured": true, 00:28:11.095 "data_offset": 2048, 00:28:11.095 
"data_size": 63488 00:28:11.095 }, 00:28:11.095 { 00:28:11.095 "name": "pt3", 00:28:11.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:11.095 "is_configured": true, 00:28:11.095 "data_offset": 2048, 00:28:11.095 "data_size": 63488 00:28:11.095 }, 00:28:11.095 { 00:28:11.095 "name": "pt4", 00:28:11.095 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:11.095 "is_configured": true, 00:28:11.095 "data_offset": 2048, 00:28:11.095 "data_size": 63488 00:28:11.095 } 00:28:11.095 ] 00:28:11.095 }' 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:11.095 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.355 07:49:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.355 [2024-10-07 07:49:10.908353] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.615 07:49:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.615 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:11.615 "name": "raid_bdev1", 00:28:11.615 "aliases": [ 00:28:11.615 "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113" 00:28:11.615 ], 00:28:11.615 "product_name": "Raid Volume", 00:28:11.615 "block_size": 512, 00:28:11.615 "num_blocks": 190464, 00:28:11.615 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:11.615 "assigned_rate_limits": { 00:28:11.615 "rw_ios_per_sec": 0, 00:28:11.615 "rw_mbytes_per_sec": 0, 00:28:11.615 "r_mbytes_per_sec": 0, 00:28:11.615 "w_mbytes_per_sec": 0 00:28:11.615 }, 00:28:11.615 "claimed": false, 00:28:11.615 "zoned": false, 00:28:11.615 "supported_io_types": { 00:28:11.615 "read": true, 00:28:11.615 "write": true, 00:28:11.615 "unmap": false, 00:28:11.615 "flush": false, 00:28:11.615 "reset": true, 00:28:11.615 "nvme_admin": false, 00:28:11.615 "nvme_io": false, 00:28:11.615 "nvme_io_md": false, 00:28:11.615 "write_zeroes": true, 00:28:11.615 "zcopy": false, 00:28:11.615 "get_zone_info": false, 00:28:11.615 "zone_management": false, 00:28:11.615 "zone_append": false, 00:28:11.615 "compare": false, 00:28:11.615 "compare_and_write": false, 00:28:11.615 "abort": false, 00:28:11.615 "seek_hole": false, 00:28:11.615 "seek_data": false, 00:28:11.615 "copy": false, 00:28:11.615 "nvme_iov_md": false 00:28:11.615 }, 00:28:11.615 "driver_specific": { 00:28:11.615 "raid": { 00:28:11.615 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:11.615 "strip_size_kb": 64, 00:28:11.615 "state": "online", 00:28:11.615 "raid_level": "raid5f", 00:28:11.615 "superblock": true, 00:28:11.615 "num_base_bdevs": 4, 00:28:11.615 "num_base_bdevs_discovered": 4, 00:28:11.616 "num_base_bdevs_operational": 4, 00:28:11.616 "base_bdevs_list": [ 00:28:11.616 { 00:28:11.616 "name": "pt1", 00:28:11.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:11.616 "is_configured": true, 00:28:11.616 "data_offset": 2048, 
00:28:11.616 "data_size": 63488 00:28:11.616 }, 00:28:11.616 { 00:28:11.616 "name": "pt2", 00:28:11.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:11.616 "is_configured": true, 00:28:11.616 "data_offset": 2048, 00:28:11.616 "data_size": 63488 00:28:11.616 }, 00:28:11.616 { 00:28:11.616 "name": "pt3", 00:28:11.616 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:11.616 "is_configured": true, 00:28:11.616 "data_offset": 2048, 00:28:11.616 "data_size": 63488 00:28:11.616 }, 00:28:11.616 { 00:28:11.616 "name": "pt4", 00:28:11.616 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:11.616 "is_configured": true, 00:28:11.616 "data_offset": 2048, 00:28:11.616 "data_size": 63488 00:28:11.616 } 00:28:11.616 ] 00:28:11.616 } 00:28:11.616 } 00:28:11.616 }' 00:28:11.616 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:11.616 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:11.616 pt2 00:28:11.616 pt3 00:28:11.616 pt4' 00:28:11.616 07:49:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.616 07:49:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.616 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 [2024-10-07 07:49:11.216347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c81883c5-d10c-4d5f-8ccf-5bdfb3d25113 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c81883c5-d10c-4d5f-8ccf-5bdfb3d25113 ']' 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 [2024-10-07 07:49:11.260187] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:11.876 [2024-10-07 07:49:11.260314] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:11.876 [2024-10-07 07:49:11.260534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:11.876 [2024-10-07 07:49:11.260663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:11.876 [2024-10-07 07:49:11.260839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.876 
07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 07:49:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:11.876 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # local es=0 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 
-- # xtrace_disable 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.877 [2024-10-07 07:49:11.420226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:11.877 [2024-10-07 07:49:11.422491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:11.877 [2024-10-07 07:49:11.422657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:11.877 [2024-10-07 07:49:11.422748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:28:11.877 [2024-10-07 07:49:11.422915] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:11.877 [2024-10-07 07:49:11.423081] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:11.877 [2024-10-07 07:49:11.423221] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:28:11.877 [2024-10-07 07:49:11.423336] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:28:11.877 [2024-10-07 07:49:11.423477] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:11.877 [2024-10-07 07:49:11.423558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:28:11.877 request: 00:28:11.877 { 00:28:11.877 "name": "raid_bdev1", 00:28:11.877 "raid_level": "raid5f", 00:28:11.877 "base_bdevs": [ 00:28:11.877 "malloc1", 00:28:11.877 "malloc2", 00:28:11.877 "malloc3", 00:28:11.877 "malloc4" 00:28:11.877 ], 00:28:11.877 "strip_size_kb": 64, 00:28:11.877 "superblock": false, 00:28:11.877 "method": "bdev_raid_create", 00:28:11.877 "req_id": 1 00:28:11.877 } 00:28:11.877 Got JSON-RPC error response 
00:28:11.877 response: 00:28:11.877 { 00:28:11.877 "code": -17, 00:28:11.877 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:11.877 } 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@656 -- # es=1 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:11.877 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.137 [2024-10-07 07:49:11.484261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:12.137 [2024-10-07 07:49:11.484434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:28:12.137 [2024-10-07 07:49:11.484468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:12.137 [2024-10-07 07:49:11.484484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.137 [2024-10-07 07:49:11.486987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.137 [2024-10-07 07:49:11.487033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:12.137 [2024-10-07 07:49:11.487102] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:12.137 [2024-10-07 07:49:11.487162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:12.137 pt1 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.137 "name": "raid_bdev1", 00:28:12.137 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:12.137 "strip_size_kb": 64, 00:28:12.137 "state": "configuring", 00:28:12.137 "raid_level": "raid5f", 00:28:12.137 "superblock": true, 00:28:12.137 "num_base_bdevs": 4, 00:28:12.137 "num_base_bdevs_discovered": 1, 00:28:12.137 "num_base_bdevs_operational": 4, 00:28:12.137 "base_bdevs_list": [ 00:28:12.137 { 00:28:12.137 "name": "pt1", 00:28:12.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:12.137 "is_configured": true, 00:28:12.137 "data_offset": 2048, 00:28:12.137 "data_size": 63488 00:28:12.137 }, 00:28:12.137 { 00:28:12.137 "name": null, 00:28:12.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:12.137 "is_configured": false, 00:28:12.137 "data_offset": 2048, 00:28:12.137 "data_size": 63488 00:28:12.137 }, 00:28:12.137 { 00:28:12.137 "name": null, 00:28:12.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:12.137 "is_configured": false, 00:28:12.137 "data_offset": 2048, 00:28:12.137 "data_size": 63488 00:28:12.137 }, 00:28:12.137 { 00:28:12.137 "name": null, 00:28:12.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:12.137 "is_configured": false, 00:28:12.137 "data_offset": 2048, 00:28:12.137 "data_size": 63488 00:28:12.137 } 00:28:12.137 ] 00:28:12.137 }' 
00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.137 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.397 [2024-10-07 07:49:11.944423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:12.397 [2024-10-07 07:49:11.944648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.397 [2024-10-07 07:49:11.944730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:12.397 [2024-10-07 07:49:11.944834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.397 [2024-10-07 07:49:11.945397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.397 [2024-10-07 07:49:11.945569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:12.397 [2024-10-07 07:49:11.945782] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:12.397 [2024-10-07 07:49:11.945822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:12.397 pt2 00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 
00:28:12.397 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.397 [2024-10-07 07:49:11.952486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.656 07:49:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 
]] 00:28:12.656 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.656 "name": "raid_bdev1", 00:28:12.656 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:12.656 "strip_size_kb": 64, 00:28:12.656 "state": "configuring", 00:28:12.656 "raid_level": "raid5f", 00:28:12.656 "superblock": true, 00:28:12.656 "num_base_bdevs": 4, 00:28:12.656 "num_base_bdevs_discovered": 1, 00:28:12.656 "num_base_bdevs_operational": 4, 00:28:12.656 "base_bdevs_list": [ 00:28:12.656 { 00:28:12.656 "name": "pt1", 00:28:12.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:12.656 "is_configured": true, 00:28:12.656 "data_offset": 2048, 00:28:12.656 "data_size": 63488 00:28:12.656 }, 00:28:12.656 { 00:28:12.656 "name": null, 00:28:12.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:12.656 "is_configured": false, 00:28:12.656 "data_offset": 0, 00:28:12.656 "data_size": 63488 00:28:12.656 }, 00:28:12.656 { 00:28:12.656 "name": null, 00:28:12.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:12.656 "is_configured": false, 00:28:12.656 "data_offset": 2048, 00:28:12.656 "data_size": 63488 00:28:12.656 }, 00:28:12.656 { 00:28:12.656 "name": null, 00:28:12.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:12.656 "is_configured": false, 00:28:12.656 "data_offset": 2048, 00:28:12.656 "data_size": 63488 00:28:12.656 } 00:28:12.656 ] 00:28:12.656 }' 00:28:12.656 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.656 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.916 [2024-10-07 07:49:12.452574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:12.916 [2024-10-07 07:49:12.452669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.916 [2024-10-07 07:49:12.452723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:12.916 [2024-10-07 07:49:12.452749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.916 [2024-10-07 07:49:12.453294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.916 [2024-10-07 07:49:12.453322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:12.916 [2024-10-07 07:49:12.453424] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:12.916 [2024-10-07 07:49:12.453457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:12.916 pt2 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.916 [2024-10-07 07:49:12.460553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:28:12.916 [2024-10-07 07:49:12.460610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.916 [2024-10-07 07:49:12.460634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:12.916 [2024-10-07 07:49:12.460646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.916 [2024-10-07 07:49:12.461101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.916 [2024-10-07 07:49:12.461128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:12.916 [2024-10-07 07:49:12.461205] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:12.916 [2024-10-07 07:49:12.461226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:12.916 pt3 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:12.916 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.916 [2024-10-07 07:49:12.468513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:12.916 [2024-10-07 07:49:12.468570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.916 [2024-10-07 07:49:12.468600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:12.916 [2024-10-07 07:49:12.468613] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.916 [2024-10-07 07:49:12.469043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.916 [2024-10-07 07:49:12.469077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:12.916 [2024-10-07 07:49:12.469152] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:12.916 [2024-10-07 07:49:12.469181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:12.916 [2024-10-07 07:49:12.469343] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:12.916 [2024-10-07 07:49:12.469354] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:12.916 [2024-10-07 07:49:12.469630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:13.176 [2024-10-07 07:49:12.477866] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:13.176 [2024-10-07 07:49:12.478018] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:13.176 [2024-10-07 07:49:12.478371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.176 pt4 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.176 "name": "raid_bdev1", 00:28:13.176 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:13.176 "strip_size_kb": 64, 00:28:13.176 "state": "online", 00:28:13.176 "raid_level": "raid5f", 00:28:13.176 "superblock": true, 00:28:13.176 "num_base_bdevs": 4, 00:28:13.176 "num_base_bdevs_discovered": 4, 00:28:13.176 "num_base_bdevs_operational": 4, 00:28:13.176 "base_bdevs_list": [ 00:28:13.176 { 00:28:13.176 "name": "pt1", 00:28:13.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:13.176 "is_configured": true, 00:28:13.176 
"data_offset": 2048, 00:28:13.176 "data_size": 63488 00:28:13.176 }, 00:28:13.176 { 00:28:13.176 "name": "pt2", 00:28:13.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.176 "is_configured": true, 00:28:13.176 "data_offset": 2048, 00:28:13.176 "data_size": 63488 00:28:13.176 }, 00:28:13.176 { 00:28:13.176 "name": "pt3", 00:28:13.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:13.176 "is_configured": true, 00:28:13.176 "data_offset": 2048, 00:28:13.176 "data_size": 63488 00:28:13.176 }, 00:28:13.176 { 00:28:13.176 "name": "pt4", 00:28:13.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:13.176 "is_configured": true, 00:28:13.176 "data_offset": 2048, 00:28:13.176 "data_size": 63488 00:28:13.176 } 00:28:13.176 ] 00:28:13.176 }' 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.176 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.461 07:49:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.461 [2024-10-07 07:49:12.972464] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:13.461 07:49:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.461 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:13.461 "name": "raid_bdev1", 00:28:13.461 "aliases": [ 00:28:13.461 "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113" 00:28:13.461 ], 00:28:13.461 "product_name": "Raid Volume", 00:28:13.461 "block_size": 512, 00:28:13.461 "num_blocks": 190464, 00:28:13.461 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:13.461 "assigned_rate_limits": { 00:28:13.461 "rw_ios_per_sec": 0, 00:28:13.461 "rw_mbytes_per_sec": 0, 00:28:13.461 "r_mbytes_per_sec": 0, 00:28:13.461 "w_mbytes_per_sec": 0 00:28:13.461 }, 00:28:13.461 "claimed": false, 00:28:13.461 "zoned": false, 00:28:13.461 "supported_io_types": { 00:28:13.461 "read": true, 00:28:13.461 "write": true, 00:28:13.461 "unmap": false, 00:28:13.461 "flush": false, 00:28:13.461 "reset": true, 00:28:13.461 "nvme_admin": false, 00:28:13.461 "nvme_io": false, 00:28:13.461 "nvme_io_md": false, 00:28:13.461 "write_zeroes": true, 00:28:13.461 "zcopy": false, 00:28:13.461 "get_zone_info": false, 00:28:13.461 "zone_management": false, 00:28:13.461 "zone_append": false, 00:28:13.461 "compare": false, 00:28:13.461 "compare_and_write": false, 00:28:13.461 "abort": false, 00:28:13.461 "seek_hole": false, 00:28:13.461 "seek_data": false, 00:28:13.461 "copy": false, 00:28:13.461 "nvme_iov_md": false 00:28:13.461 }, 00:28:13.461 "driver_specific": { 00:28:13.461 "raid": { 00:28:13.461 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:13.461 "strip_size_kb": 64, 00:28:13.461 "state": "online", 00:28:13.461 "raid_level": "raid5f", 00:28:13.461 "superblock": true, 00:28:13.461 "num_base_bdevs": 4, 00:28:13.461 "num_base_bdevs_discovered": 4, 
00:28:13.461 "num_base_bdevs_operational": 4, 00:28:13.461 "base_bdevs_list": [ 00:28:13.461 { 00:28:13.461 "name": "pt1", 00:28:13.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:13.461 "is_configured": true, 00:28:13.461 "data_offset": 2048, 00:28:13.461 "data_size": 63488 00:28:13.461 }, 00:28:13.461 { 00:28:13.461 "name": "pt2", 00:28:13.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.461 "is_configured": true, 00:28:13.461 "data_offset": 2048, 00:28:13.461 "data_size": 63488 00:28:13.461 }, 00:28:13.461 { 00:28:13.461 "name": "pt3", 00:28:13.461 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:13.461 "is_configured": true, 00:28:13.461 "data_offset": 2048, 00:28:13.461 "data_size": 63488 00:28:13.461 }, 00:28:13.461 { 00:28:13.461 "name": "pt4", 00:28:13.461 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:13.461 "is_configured": true, 00:28:13.461 "data_offset": 2048, 00:28:13.461 "data_size": 63488 00:28:13.461 } 00:28:13.461 ] 00:28:13.461 } 00:28:13.461 } 00:28:13.461 }' 00:28:13.461 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:13.732 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:13.732 pt2 00:28:13.732 pt3 00:28:13.732 pt4' 00:28:13.732 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.732 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:13.732 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.732 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:13.732 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.732 07:49:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.733 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:13.993 [2024-10-07 07:49:13.332497] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.993 
07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c81883c5-d10c-4d5f-8ccf-5bdfb3d25113 '!=' c81883c5-d10c-4d5f-8ccf-5bdfb3d25113 ']' 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.993 [2024-10-07 07:49:13.376335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.993 "name": "raid_bdev1", 00:28:13.993 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:13.993 "strip_size_kb": 64, 00:28:13.993 "state": "online", 00:28:13.993 "raid_level": "raid5f", 00:28:13.993 "superblock": true, 00:28:13.993 "num_base_bdevs": 4, 00:28:13.993 "num_base_bdevs_discovered": 3, 00:28:13.993 "num_base_bdevs_operational": 3, 00:28:13.993 "base_bdevs_list": [ 00:28:13.993 { 00:28:13.993 "name": null, 00:28:13.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.993 "is_configured": false, 00:28:13.993 "data_offset": 0, 00:28:13.993 "data_size": 63488 00:28:13.993 }, 00:28:13.993 { 00:28:13.993 "name": "pt2", 00:28:13.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.993 "is_configured": true, 00:28:13.993 "data_offset": 2048, 00:28:13.993 "data_size": 63488 00:28:13.993 }, 00:28:13.993 { 00:28:13.993 "name": "pt3", 00:28:13.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:13.993 "is_configured": true, 00:28:13.993 "data_offset": 2048, 00:28:13.993 "data_size": 63488 00:28:13.993 }, 00:28:13.993 { 00:28:13.993 "name": "pt4", 00:28:13.993 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:13.993 "is_configured": true, 00:28:13.993 
"data_offset": 2048, 00:28:13.993 "data_size": 63488 00:28:13.993 } 00:28:13.993 ] 00:28:13.993 }' 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.993 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.563 [2024-10-07 07:49:13.836409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:14.563 [2024-10-07 07:49:13.836593] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:14.563 [2024-10-07 07:49:13.836711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.563 [2024-10-07 07:49:13.836813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:14.563 [2024-10-07 07:49:13.836826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:14.563 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.563 [2024-10-07 07:49:13.928389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:14.563 [2024-10-07 07:49:13.928444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.563 [2024-10-07 07:49:13.928475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:14.564 [2024-10-07 07:49:13.928487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.564 [2024-10-07 07:49:13.931078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.564 [2024-10-07 07:49:13.931110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:14.564 [2024-10-07 07:49:13.931196] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:14.564 [2024-10-07 07:49:13.931241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:14.564 pt2 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.564 "name": "raid_bdev1", 00:28:14.564 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:14.564 "strip_size_kb": 64, 00:28:14.564 "state": "configuring", 00:28:14.564 "raid_level": "raid5f", 00:28:14.564 "superblock": true, 00:28:14.564 
"num_base_bdevs": 4, 00:28:14.564 "num_base_bdevs_discovered": 1, 00:28:14.564 "num_base_bdevs_operational": 3, 00:28:14.564 "base_bdevs_list": [ 00:28:14.564 { 00:28:14.564 "name": null, 00:28:14.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.564 "is_configured": false, 00:28:14.564 "data_offset": 2048, 00:28:14.564 "data_size": 63488 00:28:14.564 }, 00:28:14.564 { 00:28:14.564 "name": "pt2", 00:28:14.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:14.564 "is_configured": true, 00:28:14.564 "data_offset": 2048, 00:28:14.564 "data_size": 63488 00:28:14.564 }, 00:28:14.564 { 00:28:14.564 "name": null, 00:28:14.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:14.564 "is_configured": false, 00:28:14.564 "data_offset": 2048, 00:28:14.564 "data_size": 63488 00:28:14.564 }, 00:28:14.564 { 00:28:14.564 "name": null, 00:28:14.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:14.564 "is_configured": false, 00:28:14.564 "data_offset": 2048, 00:28:14.564 "data_size": 63488 00:28:14.564 } 00:28:14.564 ] 00:28:14.564 }' 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.564 07:49:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.133 [2024-10-07 07:49:14.392551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:15.133 [2024-10-07 
07:49:14.392623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.133 [2024-10-07 07:49:14.392658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:28:15.133 [2024-10-07 07:49:14.392681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.133 [2024-10-07 07:49:14.393305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.133 [2024-10-07 07:49:14.393337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:15.133 [2024-10-07 07:49:14.393467] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:15.133 [2024-10-07 07:49:14.393509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:15.133 pt3 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.133 "name": "raid_bdev1", 00:28:15.133 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:15.133 "strip_size_kb": 64, 00:28:15.133 "state": "configuring", 00:28:15.133 "raid_level": "raid5f", 00:28:15.133 "superblock": true, 00:28:15.133 "num_base_bdevs": 4, 00:28:15.133 "num_base_bdevs_discovered": 2, 00:28:15.133 "num_base_bdevs_operational": 3, 00:28:15.133 "base_bdevs_list": [ 00:28:15.133 { 00:28:15.133 "name": null, 00:28:15.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.133 "is_configured": false, 00:28:15.133 "data_offset": 2048, 00:28:15.133 "data_size": 63488 00:28:15.133 }, 00:28:15.133 { 00:28:15.133 "name": "pt2", 00:28:15.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.133 "is_configured": true, 00:28:15.133 "data_offset": 2048, 00:28:15.133 "data_size": 63488 00:28:15.133 }, 00:28:15.133 { 00:28:15.133 "name": "pt3", 00:28:15.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:15.133 "is_configured": true, 00:28:15.133 "data_offset": 2048, 00:28:15.133 "data_size": 63488 00:28:15.133 }, 00:28:15.133 { 00:28:15.133 "name": null, 00:28:15.133 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:15.133 "is_configured": false, 00:28:15.133 "data_offset": 2048, 
00:28:15.133 "data_size": 63488 00:28:15.133 } 00:28:15.133 ] 00:28:15.133 }' 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.133 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.394 [2024-10-07 07:49:14.856660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:15.394 [2024-10-07 07:49:14.856740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.394 [2024-10-07 07:49:14.856768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:15.394 [2024-10-07 07:49:14.856782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.394 [2024-10-07 07:49:14.857281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.394 [2024-10-07 07:49:14.857318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:15.394 [2024-10-07 07:49:14.857408] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:15.394 [2024-10-07 07:49:14.857433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:15.394 [2024-10-07 07:49:14.857578] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:15.394 [2024-10-07 07:49:14.857590] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:15.394 [2024-10-07 07:49:14.857879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:15.394 [2024-10-07 07:49:14.865751] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:15.394 [2024-10-07 07:49:14.865783] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:15.394 [2024-10-07 07:49:14.866114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.394 pt4 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.394 
07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.394 "name": "raid_bdev1", 00:28:15.394 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:15.394 "strip_size_kb": 64, 00:28:15.394 "state": "online", 00:28:15.394 "raid_level": "raid5f", 00:28:15.394 "superblock": true, 00:28:15.394 "num_base_bdevs": 4, 00:28:15.394 "num_base_bdevs_discovered": 3, 00:28:15.394 "num_base_bdevs_operational": 3, 00:28:15.394 "base_bdevs_list": [ 00:28:15.394 { 00:28:15.394 "name": null, 00:28:15.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.394 "is_configured": false, 00:28:15.394 "data_offset": 2048, 00:28:15.394 "data_size": 63488 00:28:15.394 }, 00:28:15.394 { 00:28:15.394 "name": "pt2", 00:28:15.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.394 "is_configured": true, 00:28:15.394 "data_offset": 2048, 00:28:15.394 "data_size": 63488 00:28:15.394 }, 00:28:15.394 { 00:28:15.394 "name": "pt3", 00:28:15.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:15.394 "is_configured": true, 00:28:15.394 "data_offset": 2048, 00:28:15.394 "data_size": 63488 00:28:15.394 }, 00:28:15.394 { 00:28:15.394 "name": "pt4", 00:28:15.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:15.394 "is_configured": true, 00:28:15.394 "data_offset": 2048, 00:28:15.394 "data_size": 63488 00:28:15.394 } 00:28:15.394 ] 00:28:15.394 }' 00:28:15.394 07:49:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.394 07:49:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.964 [2024-10-07 07:49:15.292794] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:15.964 [2024-10-07 07:49:15.292970] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:15.964 [2024-10-07 07:49:15.293082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:15.964 [2024-10-07 07:49:15.293170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:15.964 [2024-10-07 07:49:15.293191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.964 [2024-10-07 07:49:15.360831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:15.964 [2024-10-07 07:49:15.361050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.964 [2024-10-07 07:49:15.361084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:28:15.964 [2024-10-07 07:49:15.361103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.964 [2024-10-07 07:49:15.364143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.964 [2024-10-07 07:49:15.364193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:15.964 [2024-10-07 07:49:15.364286] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:15.964 [2024-10-07 07:49:15.364355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:15.964 
[2024-10-07 07:49:15.364531] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:15.964 [2024-10-07 07:49:15.364554] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:15.964 [2024-10-07 07:49:15.364574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:28:15.964 [2024-10-07 07:49:15.364652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:15.964 [2024-10-07 07:49:15.364800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:15.964 pt1 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.964 "name": "raid_bdev1", 00:28:15.964 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:15.964 "strip_size_kb": 64, 00:28:15.964 "state": "configuring", 00:28:15.964 "raid_level": "raid5f", 00:28:15.964 "superblock": true, 00:28:15.964 "num_base_bdevs": 4, 00:28:15.964 "num_base_bdevs_discovered": 2, 00:28:15.964 "num_base_bdevs_operational": 3, 00:28:15.964 "base_bdevs_list": [ 00:28:15.964 { 00:28:15.964 "name": null, 00:28:15.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.964 "is_configured": false, 00:28:15.964 "data_offset": 2048, 00:28:15.964 "data_size": 63488 00:28:15.964 }, 00:28:15.964 { 00:28:15.964 "name": "pt2", 00:28:15.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.964 "is_configured": true, 00:28:15.964 "data_offset": 2048, 00:28:15.964 "data_size": 63488 00:28:15.964 }, 00:28:15.964 { 00:28:15.964 "name": "pt3", 00:28:15.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:15.964 "is_configured": true, 00:28:15.964 "data_offset": 2048, 00:28:15.964 "data_size": 63488 00:28:15.964 }, 00:28:15.964 { 00:28:15.964 "name": null, 00:28:15.964 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:15.964 "is_configured": false, 00:28:15.964 "data_offset": 2048, 00:28:15.964 "data_size": 63488 00:28:15.964 } 00:28:15.964 ] 
00:28:15.964 }' 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.964 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.534 [2024-10-07 07:49:15.849048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:16.534 [2024-10-07 07:49:15.849117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.534 [2024-10-07 07:49:15.849149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:16.534 [2024-10-07 07:49:15.849163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.534 [2024-10-07 07:49:15.849650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.534 [2024-10-07 07:49:15.849716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:28:16.534 [2024-10-07 07:49:15.849812] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:16.534 [2024-10-07 07:49:15.849841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:16.534 [2024-10-07 07:49:15.850004] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:28:16.534 [2024-10-07 07:49:15.850022] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:16.534 [2024-10-07 07:49:15.850326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:16.534 [2024-10-07 07:49:15.859390] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:28:16.534 [2024-10-07 07:49:15.859422] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:28:16.534 [2024-10-07 07:49:15.859731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.534 pt4 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.534 07:49:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.534 "name": "raid_bdev1", 00:28:16.534 "uuid": "c81883c5-d10c-4d5f-8ccf-5bdfb3d25113", 00:28:16.534 "strip_size_kb": 64, 00:28:16.534 "state": "online", 00:28:16.534 "raid_level": "raid5f", 00:28:16.534 "superblock": true, 00:28:16.534 "num_base_bdevs": 4, 00:28:16.534 "num_base_bdevs_discovered": 3, 00:28:16.534 "num_base_bdevs_operational": 3, 00:28:16.534 "base_bdevs_list": [ 00:28:16.534 { 00:28:16.534 "name": null, 00:28:16.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.534 "is_configured": false, 00:28:16.534 "data_offset": 2048, 00:28:16.534 "data_size": 63488 00:28:16.534 }, 00:28:16.534 { 00:28:16.534 "name": "pt2", 00:28:16.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.534 "is_configured": true, 00:28:16.534 "data_offset": 2048, 00:28:16.534 "data_size": 63488 00:28:16.534 }, 00:28:16.534 { 00:28:16.534 "name": "pt3", 00:28:16.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:16.534 "is_configured": true, 00:28:16.534 "data_offset": 2048, 00:28:16.534 "data_size": 63488 
00:28:16.534 }, 00:28:16.534 { 00:28:16.534 "name": "pt4", 00:28:16.534 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:16.534 "is_configured": true, 00:28:16.534 "data_offset": 2048, 00:28:16.534 "data_size": 63488 00:28:16.534 } 00:28:16.534 ] 00:28:16.534 }' 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.534 07:49:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.794 07:49:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:28:16.794 [2024-10-07 07:49:16.334209] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c81883c5-d10c-4d5f-8ccf-5bdfb3d25113 '!=' c81883c5-d10c-4d5f-8ccf-5bdfb3d25113 ']' 00:28:17.054 07:49:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84365 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' -z 84365 ']' 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # kill -0 84365 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # uname 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 84365 00:28:17.054 killing process with pid 84365 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 84365' 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # kill 84365 00:28:17.054 [2024-10-07 07:49:16.412040] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:17.054 07:49:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@977 -- # wait 84365 00:28:17.054 [2024-10-07 07:49:16.412134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.054 [2024-10-07 07:49:16.412217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.054 [2024-10-07 07:49:16.412233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:28:17.313 [2024-10-07 07:49:16.833619] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:18.692 ************************************ 00:28:18.692 END TEST raid5f_superblock_test 00:28:18.692 
************************************ 00:28:18.692 07:49:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:28:18.692 00:28:18.692 real 0m9.001s 00:28:18.692 user 0m14.071s 00:28:18.692 sys 0m1.667s 00:28:18.692 07:49:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:28:18.692 07:49:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:18.692 07:49:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:28:18.692 07:49:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:28:18.692 07:49:18 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:28:18.692 07:49:18 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:28:18.692 07:49:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:18.692 ************************************ 00:28:18.692 START TEST raid5f_rebuild_test 00:28:18.692 ************************************ 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid5f 4 false false true 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.692 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:28:18.693 07:49:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84850 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84850 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # '[' -z 84850 ']' 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local max_retries=100 00:28:18.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@843 -- # xtrace_disable 00:28:18.693 07:49:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:18.952 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:18.952 Zero copy mechanism will not be used. 00:28:18.952 [2024-10-07 07:49:18.356295] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:28:18.952 [2024-10-07 07:49:18.356479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84850 ] 00:28:19.212 [2024-10-07 07:49:18.539393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.212 [2024-10-07 07:49:18.764324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.471 [2024-10-07 07:49:18.980114] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:19.472 [2024-10-07 07:49:18.980163] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:19.731 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:28:19.731 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # return 0 00:28:19.731 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:19.731 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:19.731 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.731 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.990 BaseBdev1_malloc 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.990 [2024-10-07 07:49:19.335171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:28:19.990 [2024-10-07 07:49:19.335369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.990 [2024-10-07 07:49:19.335406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:19.990 [2024-10-07 07:49:19.335426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.990 [2024-10-07 07:49:19.338038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.990 [2024-10-07 07:49:19.338085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:19.990 BaseBdev1 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.990 BaseBdev2_malloc 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.990 [2024-10-07 07:49:19.402513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:19.990 [2024-10-07 07:49:19.402583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.990 [2024-10-07 07:49:19.402606] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:19.990 [2024-10-07 07:49:19.402622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.990 [2024-10-07 07:49:19.405217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.990 [2024-10-07 07:49:19.405399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:19.990 BaseBdev2 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.990 BaseBdev3_malloc 00:28:19.990 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.991 [2024-10-07 07:49:19.459680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:19.991 [2024-10-07 07:49:19.459762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.991 [2024-10-07 07:49:19.459789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:19.991 [2024-10-07 07:49:19.459805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.991 
[2024-10-07 07:49:19.462443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.991 [2024-10-07 07:49:19.462494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:19.991 BaseBdev3 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.991 BaseBdev4_malloc 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:19.991 [2024-10-07 07:49:19.518114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:19.991 [2024-10-07 07:49:19.518177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.991 [2024-10-07 07:49:19.518200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:19.991 [2024-10-07 07:49:19.518215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.991 [2024-10-07 07:49:19.520798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.991 [2024-10-07 07:49:19.520847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:28:19.991 BaseBdev4 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:19.991 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.251 spare_malloc 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.251 spare_delay 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.251 [2024-10-07 07:49:19.584550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:20.251 [2024-10-07 07:49:19.584618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:20.251 [2024-10-07 07:49:19.584664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:20.251 [2024-10-07 07:49:19.584682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:20.251 [2024-10-07 07:49:19.587313] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:20.251 [2024-10-07 07:49:19.587360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:20.251 spare 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.251 [2024-10-07 07:49:19.592618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:20.251 [2024-10-07 07:49:19.594948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:20.251 [2024-10-07 07:49:19.595043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:20.251 [2024-10-07 07:49:19.595198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:20.251 [2024-10-07 07:49:19.595334] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:20.251 [2024-10-07 07:49:19.595451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:20.251 [2024-10-07 07:49:19.595873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:20.251 [2024-10-07 07:49:19.604258] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:20.251 [2024-10-07 07:49:19.604386] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:20.251 [2024-10-07 07:49:19.604769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:20.251 07:49:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:20.251 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.252 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:20.252 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.252 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.252 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:20.252 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:20.252 "name": "raid_bdev1", 00:28:20.252 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:20.252 "strip_size_kb": 64, 00:28:20.252 "state": "online", 00:28:20.252 
"raid_level": "raid5f", 00:28:20.252 "superblock": false, 00:28:20.252 "num_base_bdevs": 4, 00:28:20.252 "num_base_bdevs_discovered": 4, 00:28:20.252 "num_base_bdevs_operational": 4, 00:28:20.252 "base_bdevs_list": [ 00:28:20.252 { 00:28:20.252 "name": "BaseBdev1", 00:28:20.252 "uuid": "afae9ff4-28a1-51f1-8f54-c2449315c456", 00:28:20.252 "is_configured": true, 00:28:20.252 "data_offset": 0, 00:28:20.252 "data_size": 65536 00:28:20.252 }, 00:28:20.252 { 00:28:20.252 "name": "BaseBdev2", 00:28:20.252 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:20.252 "is_configured": true, 00:28:20.252 "data_offset": 0, 00:28:20.252 "data_size": 65536 00:28:20.252 }, 00:28:20.252 { 00:28:20.252 "name": "BaseBdev3", 00:28:20.252 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:20.252 "is_configured": true, 00:28:20.252 "data_offset": 0, 00:28:20.252 "data_size": 65536 00:28:20.252 }, 00:28:20.252 { 00:28:20.252 "name": "BaseBdev4", 00:28:20.252 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:20.252 "is_configured": true, 00:28:20.252 "data_offset": 0, 00:28:20.252 "data_size": 65536 00:28:20.252 } 00:28:20.252 ] 00:28:20.252 }' 00:28:20.252 07:49:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:20.252 07:49:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.511 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:20.511 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:20.511 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.511 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:20.511 [2024-10-07 07:49:20.050318] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:20.511 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:28:20.771 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:21.031 [2024-10-07 07:49:20.442245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:21.031 /dev/nbd0 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # break 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:21.031 1+0 records in 00:28:21.031 1+0 records out 00:28:21.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362162 s, 11.3 MB/s 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:28:21.031 07:49:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:28:21.599 512+0 records in 00:28:21.599 512+0 records out 00:28:21.599 100663296 bytes (101 MB, 96 MiB) copied, 0.589175 s, 171 MB/s 00:28:21.599 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:21.599 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:21.599 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:21.599 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:21.599 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:21.599 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:21.599 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:21.859 
[2024-10-07 07:49:21.320111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.859 [2024-10-07 07:49:21.330606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.859 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:21.860 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.860 "name": "raid_bdev1", 00:28:21.860 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:21.860 "strip_size_kb": 64, 00:28:21.860 "state": "online", 00:28:21.860 "raid_level": "raid5f", 00:28:21.860 "superblock": false, 00:28:21.860 "num_base_bdevs": 4, 00:28:21.860 "num_base_bdevs_discovered": 3, 00:28:21.860 "num_base_bdevs_operational": 3, 00:28:21.860 "base_bdevs_list": [ 00:28:21.860 { 00:28:21.860 "name": null, 00:28:21.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.860 "is_configured": false, 00:28:21.860 "data_offset": 0, 00:28:21.860 "data_size": 65536 00:28:21.860 }, 00:28:21.860 { 00:28:21.860 "name": "BaseBdev2", 00:28:21.860 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:21.860 "is_configured": true, 00:28:21.860 "data_offset": 0, 00:28:21.860 "data_size": 65536 00:28:21.860 }, 00:28:21.860 { 00:28:21.860 "name": "BaseBdev3", 00:28:21.860 "uuid": 
"09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:21.860 "is_configured": true, 00:28:21.860 "data_offset": 0, 00:28:21.860 "data_size": 65536 00:28:21.860 }, 00:28:21.860 { 00:28:21.860 "name": "BaseBdev4", 00:28:21.860 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:21.860 "is_configured": true, 00:28:21.860 "data_offset": 0, 00:28:21.860 "data_size": 65536 00:28:21.860 } 00:28:21.860 ] 00:28:21.860 }' 00:28:21.860 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.860 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.429 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:22.429 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:22.429 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.429 [2024-10-07 07:49:21.746698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:22.429 [2024-10-07 07:49:21.763993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:28:22.429 07:49:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:22.429 07:49:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:22.429 [2024-10-07 07:49:21.774966] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:23.366 07:49:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:23.366 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:23.366 "name": "raid_bdev1", 00:28:23.366 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:23.366 "strip_size_kb": 64, 00:28:23.366 "state": "online", 00:28:23.366 "raid_level": "raid5f", 00:28:23.366 "superblock": false, 00:28:23.366 "num_base_bdevs": 4, 00:28:23.366 "num_base_bdevs_discovered": 4, 00:28:23.366 "num_base_bdevs_operational": 4, 00:28:23.366 "process": { 00:28:23.366 "type": "rebuild", 00:28:23.366 "target": "spare", 00:28:23.366 "progress": { 00:28:23.366 "blocks": 17280, 00:28:23.366 "percent": 8 00:28:23.366 } 00:28:23.367 }, 00:28:23.367 "base_bdevs_list": [ 00:28:23.367 { 00:28:23.367 "name": "spare", 00:28:23.367 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:23.367 "is_configured": true, 00:28:23.367 "data_offset": 0, 00:28:23.367 "data_size": 65536 00:28:23.367 }, 00:28:23.367 { 00:28:23.367 "name": "BaseBdev2", 00:28:23.367 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:23.367 "is_configured": true, 00:28:23.367 "data_offset": 0, 00:28:23.367 "data_size": 65536 00:28:23.367 }, 00:28:23.367 { 00:28:23.367 "name": "BaseBdev3", 00:28:23.367 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:23.367 "is_configured": true, 00:28:23.367 "data_offset": 0, 00:28:23.367 "data_size": 65536 00:28:23.367 }, 
00:28:23.367 { 00:28:23.367 "name": "BaseBdev4", 00:28:23.367 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:23.367 "is_configured": true, 00:28:23.367 "data_offset": 0, 00:28:23.367 "data_size": 65536 00:28:23.367 } 00:28:23.367 ] 00:28:23.367 }' 00:28:23.367 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:23.367 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:23.367 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:23.367 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:23.367 07:49:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:23.367 07:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:23.367 07:49:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.367 [2024-10-07 07:49:22.908898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.626 [2024-10-07 07:49:22.986745] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:23.626 [2024-10-07 07:49:22.986832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.626 [2024-10-07 07:49:22.986852] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.626 [2024-10-07 07:49:22.986875] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:23.626 "name": "raid_bdev1", 00:28:23.626 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:23.626 "strip_size_kb": 64, 00:28:23.626 "state": "online", 00:28:23.626 "raid_level": "raid5f", 00:28:23.626 "superblock": false, 00:28:23.626 "num_base_bdevs": 4, 00:28:23.626 "num_base_bdevs_discovered": 3, 00:28:23.626 "num_base_bdevs_operational": 3, 00:28:23.626 "base_bdevs_list": [ 00:28:23.626 { 00:28:23.626 "name": null, 00:28:23.626 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:23.626 "is_configured": false, 00:28:23.626 "data_offset": 0, 00:28:23.626 "data_size": 65536 00:28:23.626 }, 00:28:23.626 { 00:28:23.626 "name": "BaseBdev2", 00:28:23.626 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:23.626 "is_configured": true, 00:28:23.626 "data_offset": 0, 00:28:23.626 "data_size": 65536 00:28:23.626 }, 00:28:23.626 { 00:28:23.626 "name": "BaseBdev3", 00:28:23.626 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:23.626 "is_configured": true, 00:28:23.626 "data_offset": 0, 00:28:23.626 "data_size": 65536 00:28:23.626 }, 00:28:23.626 { 00:28:23.626 "name": "BaseBdev4", 00:28:23.626 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:23.626 "is_configured": true, 00:28:23.626 "data_offset": 0, 00:28:23.626 "data_size": 65536 00:28:23.626 } 00:28:23.626 ] 00:28:23.626 }' 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:23.626 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.194 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:24.195 "name": "raid_bdev1", 00:28:24.195 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:24.195 "strip_size_kb": 64, 00:28:24.195 "state": "online", 00:28:24.195 "raid_level": "raid5f", 00:28:24.195 "superblock": false, 00:28:24.195 "num_base_bdevs": 4, 00:28:24.195 "num_base_bdevs_discovered": 3, 00:28:24.195 "num_base_bdevs_operational": 3, 00:28:24.195 "base_bdevs_list": [ 00:28:24.195 { 00:28:24.195 "name": null, 00:28:24.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.195 "is_configured": false, 00:28:24.195 "data_offset": 0, 00:28:24.195 "data_size": 65536 00:28:24.195 }, 00:28:24.195 { 00:28:24.195 "name": "BaseBdev2", 00:28:24.195 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:24.195 "is_configured": true, 00:28:24.195 "data_offset": 0, 00:28:24.195 "data_size": 65536 00:28:24.195 }, 00:28:24.195 { 00:28:24.195 "name": "BaseBdev3", 00:28:24.195 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:24.195 "is_configured": true, 00:28:24.195 "data_offset": 0, 00:28:24.195 "data_size": 65536 00:28:24.195 }, 00:28:24.195 { 00:28:24.195 "name": "BaseBdev4", 00:28:24.195 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:24.195 "is_configured": true, 00:28:24.195 "data_offset": 0, 00:28:24.195 "data_size": 65536 00:28:24.195 } 00:28:24.195 ] 00:28:24.195 }' 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.195 [2024-10-07 07:49:23.605496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:24.195 [2024-10-07 07:49:23.621370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:24.195 07:49:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:24.195 [2024-10-07 07:49:23.631830] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:25.134 "name": "raid_bdev1", 00:28:25.134 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:25.134 "strip_size_kb": 64, 00:28:25.134 "state": "online", 00:28:25.134 "raid_level": "raid5f", 00:28:25.134 "superblock": false, 00:28:25.134 "num_base_bdevs": 4, 00:28:25.134 "num_base_bdevs_discovered": 4, 00:28:25.134 "num_base_bdevs_operational": 4, 00:28:25.134 "process": { 00:28:25.134 "type": "rebuild", 00:28:25.134 "target": "spare", 00:28:25.134 "progress": { 00:28:25.134 "blocks": 17280, 00:28:25.134 "percent": 8 00:28:25.134 } 00:28:25.134 }, 00:28:25.134 "base_bdevs_list": [ 00:28:25.134 { 00:28:25.134 "name": "spare", 00:28:25.134 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:25.134 "is_configured": true, 00:28:25.134 "data_offset": 0, 00:28:25.134 "data_size": 65536 00:28:25.134 }, 00:28:25.134 { 00:28:25.134 "name": "BaseBdev2", 00:28:25.134 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:25.134 "is_configured": true, 00:28:25.134 "data_offset": 0, 00:28:25.134 "data_size": 65536 00:28:25.134 }, 00:28:25.134 { 00:28:25.134 "name": "BaseBdev3", 00:28:25.134 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:25.134 "is_configured": true, 00:28:25.134 "data_offset": 0, 00:28:25.134 "data_size": 65536 00:28:25.134 }, 00:28:25.134 { 00:28:25.134 "name": "BaseBdev4", 00:28:25.134 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:25.134 "is_configured": true, 00:28:25.134 "data_offset": 0, 00:28:25.134 "data_size": 65536 00:28:25.134 } 00:28:25.134 ] 00:28:25.134 }' 00:28:25.134 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=654 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:25.393 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:25.393 "name": "raid_bdev1", 00:28:25.393 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:25.393 "strip_size_kb": 64, 
00:28:25.393 "state": "online", 00:28:25.393 "raid_level": "raid5f", 00:28:25.393 "superblock": false, 00:28:25.393 "num_base_bdevs": 4, 00:28:25.393 "num_base_bdevs_discovered": 4, 00:28:25.393 "num_base_bdevs_operational": 4, 00:28:25.393 "process": { 00:28:25.393 "type": "rebuild", 00:28:25.393 "target": "spare", 00:28:25.393 "progress": { 00:28:25.393 "blocks": 21120, 00:28:25.393 "percent": 10 00:28:25.393 } 00:28:25.393 }, 00:28:25.393 "base_bdevs_list": [ 00:28:25.393 { 00:28:25.393 "name": "spare", 00:28:25.393 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:25.394 "is_configured": true, 00:28:25.394 "data_offset": 0, 00:28:25.394 "data_size": 65536 00:28:25.394 }, 00:28:25.394 { 00:28:25.394 "name": "BaseBdev2", 00:28:25.394 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:25.394 "is_configured": true, 00:28:25.394 "data_offset": 0, 00:28:25.394 "data_size": 65536 00:28:25.394 }, 00:28:25.394 { 00:28:25.394 "name": "BaseBdev3", 00:28:25.394 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:25.394 "is_configured": true, 00:28:25.394 "data_offset": 0, 00:28:25.394 "data_size": 65536 00:28:25.394 }, 00:28:25.394 { 00:28:25.394 "name": "BaseBdev4", 00:28:25.394 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:25.394 "is_configured": true, 00:28:25.394 "data_offset": 0, 00:28:25.394 "data_size": 65536 00:28:25.394 } 00:28:25.394 ] 00:28:25.394 }' 00:28:25.394 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:25.394 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.394 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:25.394 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.394 07:49:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:26.773 "name": "raid_bdev1", 00:28:26.773 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:26.773 "strip_size_kb": 64, 00:28:26.773 "state": "online", 00:28:26.773 "raid_level": "raid5f", 00:28:26.773 "superblock": false, 00:28:26.773 "num_base_bdevs": 4, 00:28:26.773 "num_base_bdevs_discovered": 4, 00:28:26.773 "num_base_bdevs_operational": 4, 00:28:26.773 "process": { 00:28:26.773 "type": "rebuild", 00:28:26.773 "target": "spare", 00:28:26.773 "progress": { 00:28:26.773 "blocks": 42240, 00:28:26.773 "percent": 21 00:28:26.773 } 00:28:26.773 }, 00:28:26.773 "base_bdevs_list": [ 00:28:26.773 { 00:28:26.773 "name": "spare", 00:28:26.773 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:26.773 "is_configured": true, 
00:28:26.773 "data_offset": 0, 00:28:26.773 "data_size": 65536 00:28:26.773 }, 00:28:26.773 { 00:28:26.773 "name": "BaseBdev2", 00:28:26.773 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:26.773 "is_configured": true, 00:28:26.773 "data_offset": 0, 00:28:26.773 "data_size": 65536 00:28:26.773 }, 00:28:26.773 { 00:28:26.773 "name": "BaseBdev3", 00:28:26.773 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:26.773 "is_configured": true, 00:28:26.773 "data_offset": 0, 00:28:26.773 "data_size": 65536 00:28:26.773 }, 00:28:26.773 { 00:28:26.773 "name": "BaseBdev4", 00:28:26.773 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:26.773 "is_configured": true, 00:28:26.773 "data_offset": 0, 00:28:26.773 "data_size": 65536 00:28:26.773 } 00:28:26.773 ] 00:28:26.773 }' 00:28:26.773 07:49:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:26.773 07:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:26.773 07:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:26.773 07:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:26.773 07:49:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:27.713 "name": "raid_bdev1", 00:28:27.713 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:27.713 "strip_size_kb": 64, 00:28:27.713 "state": "online", 00:28:27.713 "raid_level": "raid5f", 00:28:27.713 "superblock": false, 00:28:27.713 "num_base_bdevs": 4, 00:28:27.713 "num_base_bdevs_discovered": 4, 00:28:27.713 "num_base_bdevs_operational": 4, 00:28:27.713 "process": { 00:28:27.713 "type": "rebuild", 00:28:27.713 "target": "spare", 00:28:27.713 "progress": { 00:28:27.713 "blocks": 65280, 00:28:27.713 "percent": 33 00:28:27.713 } 00:28:27.713 }, 00:28:27.713 "base_bdevs_list": [ 00:28:27.713 { 00:28:27.713 "name": "spare", 00:28:27.713 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:27.713 "is_configured": true, 00:28:27.713 "data_offset": 0, 00:28:27.713 "data_size": 65536 00:28:27.713 }, 00:28:27.713 { 00:28:27.713 "name": "BaseBdev2", 00:28:27.713 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:27.713 "is_configured": true, 00:28:27.713 "data_offset": 0, 00:28:27.713 "data_size": 65536 00:28:27.713 }, 00:28:27.713 { 00:28:27.713 "name": "BaseBdev3", 00:28:27.713 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:27.713 "is_configured": true, 00:28:27.713 "data_offset": 0, 00:28:27.713 "data_size": 65536 00:28:27.713 }, 00:28:27.713 { 00:28:27.713 "name": "BaseBdev4", 00:28:27.713 "uuid": 
"98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:27.713 "is_configured": true, 00:28:27.713 "data_offset": 0, 00:28:27.713 "data_size": 65536 00:28:27.713 } 00:28:27.713 ] 00:28:27.713 }' 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.713 07:49:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:29.094 "name": "raid_bdev1", 00:28:29.094 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:29.094 "strip_size_kb": 64, 00:28:29.094 "state": "online", 00:28:29.094 "raid_level": "raid5f", 00:28:29.094 "superblock": false, 00:28:29.094 "num_base_bdevs": 4, 00:28:29.094 "num_base_bdevs_discovered": 4, 00:28:29.094 "num_base_bdevs_operational": 4, 00:28:29.094 "process": { 00:28:29.094 "type": "rebuild", 00:28:29.094 "target": "spare", 00:28:29.094 "progress": { 00:28:29.094 "blocks": 86400, 00:28:29.094 "percent": 43 00:28:29.094 } 00:28:29.094 }, 00:28:29.094 "base_bdevs_list": [ 00:28:29.094 { 00:28:29.094 "name": "spare", 00:28:29.094 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:29.094 "is_configured": true, 00:28:29.094 "data_offset": 0, 00:28:29.094 "data_size": 65536 00:28:29.094 }, 00:28:29.094 { 00:28:29.094 "name": "BaseBdev2", 00:28:29.094 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:29.094 "is_configured": true, 00:28:29.094 "data_offset": 0, 00:28:29.094 "data_size": 65536 00:28:29.094 }, 00:28:29.094 { 00:28:29.094 "name": "BaseBdev3", 00:28:29.094 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:29.094 "is_configured": true, 00:28:29.094 "data_offset": 0, 00:28:29.094 "data_size": 65536 00:28:29.094 }, 00:28:29.094 { 00:28:29.094 "name": "BaseBdev4", 00:28:29.094 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:29.094 "is_configured": true, 00:28:29.094 "data_offset": 0, 00:28:29.094 "data_size": 65536 00:28:29.094 } 00:28:29.094 ] 00:28:29.094 }' 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:28:29.094 07:49:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:30.032 "name": "raid_bdev1", 00:28:30.032 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:30.032 "strip_size_kb": 64, 00:28:30.032 "state": "online", 00:28:30.032 "raid_level": "raid5f", 00:28:30.032 "superblock": false, 00:28:30.032 "num_base_bdevs": 4, 00:28:30.032 "num_base_bdevs_discovered": 4, 00:28:30.032 "num_base_bdevs_operational": 4, 00:28:30.032 "process": { 00:28:30.032 "type": "rebuild", 00:28:30.032 "target": "spare", 00:28:30.032 "progress": { 00:28:30.032 "blocks": 107520, 00:28:30.032 "percent": 54 00:28:30.032 } 00:28:30.032 }, 00:28:30.032 
"base_bdevs_list": [ 00:28:30.032 { 00:28:30.032 "name": "spare", 00:28:30.032 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:30.032 "is_configured": true, 00:28:30.032 "data_offset": 0, 00:28:30.032 "data_size": 65536 00:28:30.032 }, 00:28:30.032 { 00:28:30.032 "name": "BaseBdev2", 00:28:30.032 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:30.032 "is_configured": true, 00:28:30.032 "data_offset": 0, 00:28:30.032 "data_size": 65536 00:28:30.032 }, 00:28:30.032 { 00:28:30.032 "name": "BaseBdev3", 00:28:30.032 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:30.032 "is_configured": true, 00:28:30.032 "data_offset": 0, 00:28:30.032 "data_size": 65536 00:28:30.032 }, 00:28:30.032 { 00:28:30.032 "name": "BaseBdev4", 00:28:30.032 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:30.032 "is_configured": true, 00:28:30.032 "data_offset": 0, 00:28:30.032 "data_size": 65536 00:28:30.032 } 00:28:30.032 ] 00:28:30.032 }' 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:30.032 07:49:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:30.970 07:49:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:30.970 "name": "raid_bdev1", 00:28:30.970 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:30.970 "strip_size_kb": 64, 00:28:30.970 "state": "online", 00:28:30.970 "raid_level": "raid5f", 00:28:30.970 "superblock": false, 00:28:30.970 "num_base_bdevs": 4, 00:28:30.970 "num_base_bdevs_discovered": 4, 00:28:30.970 "num_base_bdevs_operational": 4, 00:28:30.970 "process": { 00:28:30.970 "type": "rebuild", 00:28:30.970 "target": "spare", 00:28:30.970 "progress": { 00:28:30.970 "blocks": 128640, 00:28:30.970 "percent": 65 00:28:30.970 } 00:28:30.970 }, 00:28:30.970 "base_bdevs_list": [ 00:28:30.970 { 00:28:30.970 "name": "spare", 00:28:30.970 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:30.970 "is_configured": true, 00:28:30.970 "data_offset": 0, 00:28:30.970 "data_size": 65536 00:28:30.970 }, 00:28:30.970 { 00:28:30.970 "name": "BaseBdev2", 00:28:30.970 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:30.970 "is_configured": true, 00:28:30.970 "data_offset": 0, 00:28:30.970 "data_size": 65536 00:28:30.970 }, 00:28:30.970 { 00:28:30.970 "name": "BaseBdev3", 00:28:30.970 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:30.970 
"is_configured": true, 00:28:30.970 "data_offset": 0, 00:28:30.970 "data_size": 65536 00:28:30.970 }, 00:28:30.970 { 00:28:30.970 "name": "BaseBdev4", 00:28:30.970 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:30.970 "is_configured": true, 00:28:30.970 "data_offset": 0, 00:28:30.970 "data_size": 65536 00:28:30.970 } 00:28:30.970 ] 00:28:30.970 }' 00:28:30.970 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:31.229 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:31.229 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:31.229 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:31.229 07:49:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:32.167 "name": "raid_bdev1", 00:28:32.167 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:32.167 "strip_size_kb": 64, 00:28:32.167 "state": "online", 00:28:32.167 "raid_level": "raid5f", 00:28:32.167 "superblock": false, 00:28:32.167 "num_base_bdevs": 4, 00:28:32.167 "num_base_bdevs_discovered": 4, 00:28:32.167 "num_base_bdevs_operational": 4, 00:28:32.167 "process": { 00:28:32.167 "type": "rebuild", 00:28:32.167 "target": "spare", 00:28:32.167 "progress": { 00:28:32.167 "blocks": 151680, 00:28:32.167 "percent": 77 00:28:32.167 } 00:28:32.167 }, 00:28:32.167 "base_bdevs_list": [ 00:28:32.167 { 00:28:32.167 "name": "spare", 00:28:32.167 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:32.167 "is_configured": true, 00:28:32.167 "data_offset": 0, 00:28:32.167 "data_size": 65536 00:28:32.167 }, 00:28:32.167 { 00:28:32.167 "name": "BaseBdev2", 00:28:32.167 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:32.167 "is_configured": true, 00:28:32.167 "data_offset": 0, 00:28:32.167 "data_size": 65536 00:28:32.167 }, 00:28:32.167 { 00:28:32.167 "name": "BaseBdev3", 00:28:32.167 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:32.167 "is_configured": true, 00:28:32.167 "data_offset": 0, 00:28:32.167 "data_size": 65536 00:28:32.167 }, 00:28:32.167 { 00:28:32.167 "name": "BaseBdev4", 00:28:32.167 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:32.167 "is_configured": true, 00:28:32.167 "data_offset": 0, 00:28:32.167 "data_size": 65536 00:28:32.167 } 00:28:32.167 ] 00:28:32.167 }' 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:32.167 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:32.167 07:49:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:32.425 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:32.425 07:49:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:33.411 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:33.411 "name": "raid_bdev1", 00:28:33.411 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:33.411 "strip_size_kb": 64, 00:28:33.411 "state": "online", 00:28:33.411 "raid_level": "raid5f", 00:28:33.411 "superblock": false, 00:28:33.411 "num_base_bdevs": 4, 00:28:33.411 "num_base_bdevs_discovered": 4, 00:28:33.411 "num_base_bdevs_operational": 4, 00:28:33.411 "process": { 00:28:33.411 
"type": "rebuild", 00:28:33.411 "target": "spare", 00:28:33.411 "progress": { 00:28:33.411 "blocks": 172800, 00:28:33.411 "percent": 87 00:28:33.411 } 00:28:33.411 }, 00:28:33.411 "base_bdevs_list": [ 00:28:33.411 { 00:28:33.411 "name": "spare", 00:28:33.411 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:33.411 "is_configured": true, 00:28:33.411 "data_offset": 0, 00:28:33.411 "data_size": 65536 00:28:33.411 }, 00:28:33.411 { 00:28:33.411 "name": "BaseBdev2", 00:28:33.411 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:33.411 "is_configured": true, 00:28:33.411 "data_offset": 0, 00:28:33.411 "data_size": 65536 00:28:33.412 }, 00:28:33.412 { 00:28:33.412 "name": "BaseBdev3", 00:28:33.412 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:33.412 "is_configured": true, 00:28:33.412 "data_offset": 0, 00:28:33.412 "data_size": 65536 00:28:33.412 }, 00:28:33.412 { 00:28:33.412 "name": "BaseBdev4", 00:28:33.412 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:33.412 "is_configured": true, 00:28:33.412 "data_offset": 0, 00:28:33.412 "data_size": 65536 00:28:33.412 } 00:28:33.412 ] 00:28:33.412 }' 00:28:33.412 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:33.412 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:33.412 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:33.412 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:33.412 07:49:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.348 07:49:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:34.607 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:34.608 "name": "raid_bdev1", 00:28:34.608 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:34.608 "strip_size_kb": 64, 00:28:34.608 "state": "online", 00:28:34.608 "raid_level": "raid5f", 00:28:34.608 "superblock": false, 00:28:34.608 "num_base_bdevs": 4, 00:28:34.608 "num_base_bdevs_discovered": 4, 00:28:34.608 "num_base_bdevs_operational": 4, 00:28:34.608 "process": { 00:28:34.608 "type": "rebuild", 00:28:34.608 "target": "spare", 00:28:34.608 "progress": { 00:28:34.608 "blocks": 193920, 00:28:34.608 "percent": 98 00:28:34.608 } 00:28:34.608 }, 00:28:34.608 "base_bdevs_list": [ 00:28:34.608 { 00:28:34.608 "name": "spare", 00:28:34.608 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:34.608 "is_configured": true, 00:28:34.608 "data_offset": 0, 00:28:34.608 "data_size": 65536 00:28:34.608 }, 00:28:34.608 { 00:28:34.608 "name": "BaseBdev2", 00:28:34.608 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:34.608 "is_configured": true, 00:28:34.608 "data_offset": 0, 00:28:34.608 
"data_size": 65536 00:28:34.608 }, 00:28:34.608 { 00:28:34.608 "name": "BaseBdev3", 00:28:34.608 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:34.608 "is_configured": true, 00:28:34.608 "data_offset": 0, 00:28:34.608 "data_size": 65536 00:28:34.608 }, 00:28:34.608 { 00:28:34.608 "name": "BaseBdev4", 00:28:34.608 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:34.608 "is_configured": true, 00:28:34.608 "data_offset": 0, 00:28:34.608 "data_size": 65536 00:28:34.608 } 00:28:34.608 ] 00:28:34.608 }' 00:28:34.608 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:34.608 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:34.608 07:49:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:34.608 [2024-10-07 07:49:34.017301] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:34.608 [2024-10-07 07:49:34.017388] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:34.608 [2024-10-07 07:49:34.017441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.608 07:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:34.608 07:49:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:35.545 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:35.545 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:35.545 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:35.545 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:35.545 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:35.546 "name": "raid_bdev1", 00:28:35.546 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:35.546 "strip_size_kb": 64, 00:28:35.546 "state": "online", 00:28:35.546 "raid_level": "raid5f", 00:28:35.546 "superblock": false, 00:28:35.546 "num_base_bdevs": 4, 00:28:35.546 "num_base_bdevs_discovered": 4, 00:28:35.546 "num_base_bdevs_operational": 4, 00:28:35.546 "base_bdevs_list": [ 00:28:35.546 { 00:28:35.546 "name": "spare", 00:28:35.546 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:35.546 "is_configured": true, 00:28:35.546 "data_offset": 0, 00:28:35.546 "data_size": 65536 00:28:35.546 }, 00:28:35.546 { 00:28:35.546 "name": "BaseBdev2", 00:28:35.546 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:35.546 "is_configured": true, 00:28:35.546 "data_offset": 0, 00:28:35.546 "data_size": 65536 00:28:35.546 }, 00:28:35.546 { 00:28:35.546 "name": "BaseBdev3", 00:28:35.546 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:35.546 "is_configured": true, 00:28:35.546 "data_offset": 0, 00:28:35.546 "data_size": 65536 00:28:35.546 }, 00:28:35.546 { 00:28:35.546 "name": "BaseBdev4", 00:28:35.546 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:35.546 "is_configured": true, 00:28:35.546 "data_offset": 0, 
00:28:35.546 "data_size": 65536 00:28:35.546 } 00:28:35.546 ] 00:28:35.546 }' 00:28:35.546 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:35.806 "name": "raid_bdev1", 00:28:35.806 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:35.806 "strip_size_kb": 64, 00:28:35.806 "state": "online", 00:28:35.806 "raid_level": 
"raid5f", 00:28:35.806 "superblock": false, 00:28:35.806 "num_base_bdevs": 4, 00:28:35.806 "num_base_bdevs_discovered": 4, 00:28:35.806 "num_base_bdevs_operational": 4, 00:28:35.806 "base_bdevs_list": [ 00:28:35.806 { 00:28:35.806 "name": "spare", 00:28:35.806 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 }, 00:28:35.806 { 00:28:35.806 "name": "BaseBdev2", 00:28:35.806 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 }, 00:28:35.806 { 00:28:35.806 "name": "BaseBdev3", 00:28:35.806 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 }, 00:28:35.806 { 00:28:35.806 "name": "BaseBdev4", 00:28:35.806 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 } 00:28:35.806 ] 00:28:35.806 }' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:35.806 "name": "raid_bdev1", 00:28:35.806 "uuid": "0d6c395e-acbb-46fe-a5a8-f8f071408498", 00:28:35.806 "strip_size_kb": 64, 00:28:35.806 "state": "online", 00:28:35.806 "raid_level": "raid5f", 00:28:35.806 "superblock": false, 00:28:35.806 "num_base_bdevs": 4, 00:28:35.806 "num_base_bdevs_discovered": 4, 00:28:35.806 "num_base_bdevs_operational": 4, 00:28:35.806 "base_bdevs_list": [ 00:28:35.806 { 00:28:35.806 "name": "spare", 00:28:35.806 "uuid": "270db91f-7963-5fd5-aa95-8554878aade4", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 }, 00:28:35.806 { 00:28:35.806 "name": "BaseBdev2", 
00:28:35.806 "uuid": "e22ac1e1-0c0a-5f3d-b0a6-816ffcd40afb", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 }, 00:28:35.806 { 00:28:35.806 "name": "BaseBdev3", 00:28:35.806 "uuid": "09ba9945-87ce-595a-87a2-a818d08c1f61", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 }, 00:28:35.806 { 00:28:35.806 "name": "BaseBdev4", 00:28:35.806 "uuid": "98c0f36f-7d8c-507d-833d-2fe36500af53", 00:28:35.806 "is_configured": true, 00:28:35.806 "data_offset": 0, 00:28:35.806 "data_size": 65536 00:28:35.806 } 00:28:35.806 ] 00:28:35.806 }' 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:35.806 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.374 [2024-10-07 07:49:35.760367] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:36.374 [2024-10-07 07:49:35.760411] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:36.374 [2024-10-07 07:49:35.760537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:36.374 [2024-10-07 07:49:35.760646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:36.374 [2024-10-07 07:49:35.760668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:36.374 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:36.375 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:36.375 07:49:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:36.634 /dev/nbd0 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # break 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:36.634 1+0 records in 00:28:36.634 1+0 records out 00:28:36.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319174 s, 12.8 MB/s 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:36.634 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:36.893 /dev/nbd1 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local i 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # break 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:36.893 1+0 records in 00:28:36.893 1+0 records out 00:28:36.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375124 s, 10.9 MB/s 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # size=4096 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # return 0 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:36.893 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:37.152 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:37.152 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:37.152 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:37.152 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:37.152 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:37.152 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.153 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.412 07:49:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:28:37.671 07:49:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84850 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' -z 84850 ']' 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # kill -0 84850 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # uname 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # ps --no-headers -o 
comm= 84850 00:28:37.672 killing process with pid 84850 00:28:37.672 Received shutdown signal, test time was about 60.000000 seconds 00:28:37.672 00:28:37.672 Latency(us) 00:28:37.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.672 =================================================================================================================== 00:28:37.672 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # echo 'killing process with pid 84850' 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # kill 84850 00:28:37.672 [2024-10-07 07:49:37.211584] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:37.672 07:49:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@977 -- # wait 84850 00:28:38.240 [2024-10-07 07:49:37.726249] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:39.619 07:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:28:39.619 00:28:39.619 real 0m20.827s 00:28:39.619 user 0m24.860s 00:28:39.619 sys 0m2.592s 00:28:39.619 07:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # xtrace_disable 00:28:39.619 07:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.619 ************************************ 00:28:39.619 END TEST raid5f_rebuild_test 00:28:39.619 ************************************ 00:28:39.619 07:49:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:28:39.619 07:49:39 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:28:39.619 
07:49:39 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:28:39.619 07:49:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:39.619 ************************************ 00:28:39.619 START TEST raid5f_rebuild_test_sb 00:28:39.619 ************************************ 00:28:39.619 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid5f 4 true false true 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:39.620 
07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85378 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 85378 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # '[' -z 85378 ']' 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local max_retries=100 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@843 -- # xtrace_disable 00:28:39.620 07:49:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.879 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:39.879 Zero copy mechanism will not be used. 00:28:39.879 [2024-10-07 07:49:39.251214] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:28:39.879 [2024-10-07 07:49:39.251393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85378 ] 00:28:39.879 [2024-10-07 07:49:39.436531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.138 [2024-10-07 07:49:39.656486] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.397 [2024-10-07 07:49:39.884860] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:40.397 [2024-10-07 07:49:39.884911] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:40.657 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:28:40.657 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # return 0 00:28:40.657 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:40.657 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:40.657 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.657 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.916 BaseBdev1_malloc 00:28:40.916 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.916 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:40.916 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.916 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.916 [2024-10-07 07:49:40.245898] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:40.916 [2024-10-07 07:49:40.246122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:40.916 [2024-10-07 07:49:40.246159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:40.916 [2024-10-07 07:49:40.246178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.917 [2024-10-07 07:49:40.248639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:40.917 [2024-10-07 07:49:40.248684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:40.917 BaseBdev1 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.917 BaseBdev2_malloc 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.917 [2024-10-07 07:49:40.316311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:40.917 [2024-10-07 07:49:40.316520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:28:40.917 [2024-10-07 07:49:40.316553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:40.917 [2024-10-07 07:49:40.316568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.917 [2024-10-07 07:49:40.318946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:40.917 [2024-10-07 07:49:40.318991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:40.917 BaseBdev2 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.917 BaseBdev3_malloc 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.917 [2024-10-07 07:49:40.375893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:40.917 [2024-10-07 07:49:40.375953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:40.917 [2024-10-07 07:49:40.375995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:40.917 [2024-10-07 
07:49:40.376011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.917 [2024-10-07 07:49:40.378527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:40.917 [2024-10-07 07:49:40.378572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:40.917 BaseBdev3 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.917 BaseBdev4_malloc 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.917 [2024-10-07 07:49:40.436747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:40.917 [2024-10-07 07:49:40.436809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:40.917 [2024-10-07 07:49:40.436833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:40.917 [2024-10-07 07:49:40.436848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.917 [2024-10-07 07:49:40.439382] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:28:40.917 [2024-10-07 07:49:40.439430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:40.917 BaseBdev4 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:40.917 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.176 spare_malloc 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.176 spare_delay 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:41.176 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.176 [2024-10-07 07:49:40.506972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:41.176 [2024-10-07 07:49:40.507033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.176 [2024-10-07 07:49:40.507056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:28:41.176 [2024-10-07 07:49:40.507070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.176 [2024-10-07 07:49:40.509509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.176 [2024-10-07 07:49:40.509553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:41.176 spare 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.177 [2024-10-07 07:49:40.519041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:41.177 [2024-10-07 07:49:40.521086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:41.177 [2024-10-07 07:49:40.521277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:41.177 [2024-10-07 07:49:40.521342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:41.177 [2024-10-07 07:49:40.521528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:41.177 [2024-10-07 07:49:40.521542] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:41.177 [2024-10-07 07:49:40.521822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:41.177 [2024-10-07 07:49:40.529062] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:41.177 [2024-10-07 07:49:40.529084] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:28:41.177 [2024-10-07 07:49:40.529263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:41.177 07:49:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:41.177 "name": "raid_bdev1", 00:28:41.177 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:41.177 "strip_size_kb": 64, 00:28:41.177 "state": "online", 00:28:41.177 "raid_level": "raid5f", 00:28:41.177 "superblock": true, 00:28:41.177 "num_base_bdevs": 4, 00:28:41.177 "num_base_bdevs_discovered": 4, 00:28:41.177 "num_base_bdevs_operational": 4, 00:28:41.177 "base_bdevs_list": [ 00:28:41.177 { 00:28:41.177 "name": "BaseBdev1", 00:28:41.177 "uuid": "4fea2512-2806-5739-aa38-b1464a9d766d", 00:28:41.177 "is_configured": true, 00:28:41.177 "data_offset": 2048, 00:28:41.177 "data_size": 63488 00:28:41.177 }, 00:28:41.177 { 00:28:41.177 "name": "BaseBdev2", 00:28:41.177 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:41.177 "is_configured": true, 00:28:41.177 "data_offset": 2048, 00:28:41.177 "data_size": 63488 00:28:41.177 }, 00:28:41.177 { 00:28:41.177 "name": "BaseBdev3", 00:28:41.177 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:41.177 "is_configured": true, 00:28:41.177 "data_offset": 2048, 00:28:41.177 "data_size": 63488 00:28:41.177 }, 00:28:41.177 { 00:28:41.177 "name": "BaseBdev4", 00:28:41.177 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:41.177 "is_configured": true, 00:28:41.177 "data_offset": 2048, 00:28:41.177 "data_size": 63488 00:28:41.177 } 00:28:41.177 ] 00:28:41.177 }' 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:41.177 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.437 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:41.437 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:41.437 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.437 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:41.437 [2024-10-07 07:49:40.969965] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:41.437 07:49:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:41.696 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:41.956 [2024-10-07 07:49:41.337896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:41.956 /dev/nbd0 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:41.956 1+0 records in 00:28:41.956 1+0 records out 00:28:41.956 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000288817 s, 14.2 MB/s 00:28:41.956 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:28:41.957 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:28:42.527 496+0 records in 00:28:42.527 496+0 records out 00:28:42.527 97517568 bytes (98 MB, 93 MiB) copied, 0.548034 s, 178 MB/s 00:28:42.527 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:42.527 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:42.527 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:42.527 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:42.527 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:28:42.527 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:42.527 07:49:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:42.786 [2024-10-07 07:49:42.169067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:42.786 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:42.787 [2024-10-07 07:49:42.204222] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:42.787 "name": "raid_bdev1", 00:28:42.787 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:42.787 "strip_size_kb": 64, 00:28:42.787 "state": "online", 00:28:42.787 "raid_level": "raid5f", 00:28:42.787 "superblock": true, 00:28:42.787 "num_base_bdevs": 4, 00:28:42.787 "num_base_bdevs_discovered": 3, 00:28:42.787 "num_base_bdevs_operational": 3, 00:28:42.787 "base_bdevs_list": [ 00:28:42.787 { 00:28:42.787 "name": null, 
00:28:42.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.787 "is_configured": false, 00:28:42.787 "data_offset": 0, 00:28:42.787 "data_size": 63488 00:28:42.787 }, 00:28:42.787 { 00:28:42.787 "name": "BaseBdev2", 00:28:42.787 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:42.787 "is_configured": true, 00:28:42.787 "data_offset": 2048, 00:28:42.787 "data_size": 63488 00:28:42.787 }, 00:28:42.787 { 00:28:42.787 "name": "BaseBdev3", 00:28:42.787 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:42.787 "is_configured": true, 00:28:42.787 "data_offset": 2048, 00:28:42.787 "data_size": 63488 00:28:42.787 }, 00:28:42.787 { 00:28:42.787 "name": "BaseBdev4", 00:28:42.787 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:42.787 "is_configured": true, 00:28:42.787 "data_offset": 2048, 00:28:42.787 "data_size": 63488 00:28:42.787 } 00:28:42.787 ] 00:28:42.787 }' 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:42.787 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:43.046 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:43.046 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:43.046 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:43.046 [2024-10-07 07:49:42.604312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:43.305 [2024-10-07 07:49:42.622374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:28:43.305 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:43.305 07:49:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:43.305 [2024-10-07 07:49:42.634274] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:44.243 "name": "raid_bdev1", 00:28:44.243 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:44.243 "strip_size_kb": 64, 00:28:44.243 "state": "online", 00:28:44.243 "raid_level": "raid5f", 00:28:44.243 "superblock": true, 00:28:44.243 "num_base_bdevs": 4, 00:28:44.243 "num_base_bdevs_discovered": 4, 00:28:44.243 "num_base_bdevs_operational": 4, 00:28:44.243 "process": { 00:28:44.243 "type": "rebuild", 00:28:44.243 "target": "spare", 00:28:44.243 "progress": { 00:28:44.243 "blocks": 17280, 00:28:44.243 "percent": 9 00:28:44.243 } 00:28:44.243 }, 00:28:44.243 "base_bdevs_list": [ 00:28:44.243 { 00:28:44.243 "name": "spare", 00:28:44.243 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:44.243 "is_configured": true, 
00:28:44.243 "data_offset": 2048, 00:28:44.243 "data_size": 63488 00:28:44.243 }, 00:28:44.243 { 00:28:44.243 "name": "BaseBdev2", 00:28:44.243 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:44.243 "is_configured": true, 00:28:44.243 "data_offset": 2048, 00:28:44.243 "data_size": 63488 00:28:44.243 }, 00:28:44.243 { 00:28:44.243 "name": "BaseBdev3", 00:28:44.243 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:44.243 "is_configured": true, 00:28:44.243 "data_offset": 2048, 00:28:44.243 "data_size": 63488 00:28:44.243 }, 00:28:44.243 { 00:28:44.243 "name": "BaseBdev4", 00:28:44.243 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:44.243 "is_configured": true, 00:28:44.243 "data_offset": 2048, 00:28:44.243 "data_size": 63488 00:28:44.243 } 00:28:44.243 ] 00:28:44.243 }' 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:44.243 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.243 [2024-10-07 07:49:43.763456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:44.503 [2024-10-07 07:49:43.845392] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:44.503 [2024-10-07 07:49:43.845510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:44.503 [2024-10-07 
07:49:43.845533] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:44.503 [2024-10-07 07:49:43.845551] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:44.503 "name": "raid_bdev1", 00:28:44.503 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:44.503 "strip_size_kb": 64, 00:28:44.503 "state": "online", 00:28:44.503 "raid_level": "raid5f", 00:28:44.503 "superblock": true, 00:28:44.503 "num_base_bdevs": 4, 00:28:44.503 "num_base_bdevs_discovered": 3, 00:28:44.503 "num_base_bdevs_operational": 3, 00:28:44.503 "base_bdevs_list": [ 00:28:44.503 { 00:28:44.503 "name": null, 00:28:44.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:44.503 "is_configured": false, 00:28:44.503 "data_offset": 0, 00:28:44.503 "data_size": 63488 00:28:44.503 }, 00:28:44.503 { 00:28:44.503 "name": "BaseBdev2", 00:28:44.503 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:44.503 "is_configured": true, 00:28:44.503 "data_offset": 2048, 00:28:44.503 "data_size": 63488 00:28:44.503 }, 00:28:44.503 { 00:28:44.503 "name": "BaseBdev3", 00:28:44.503 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:44.503 "is_configured": true, 00:28:44.503 "data_offset": 2048, 00:28:44.503 "data_size": 63488 00:28:44.503 }, 00:28:44.503 { 00:28:44.503 "name": "BaseBdev4", 00:28:44.503 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:44.503 "is_configured": true, 00:28:44.503 "data_offset": 2048, 00:28:44.503 "data_size": 63488 00:28:44.503 } 00:28:44.503 ] 00:28:44.503 }' 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:44.503 07:49:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.762 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:45.021 "name": "raid_bdev1", 00:28:45.021 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:45.021 "strip_size_kb": 64, 00:28:45.021 "state": "online", 00:28:45.021 "raid_level": "raid5f", 00:28:45.021 "superblock": true, 00:28:45.021 "num_base_bdevs": 4, 00:28:45.021 "num_base_bdevs_discovered": 3, 00:28:45.021 "num_base_bdevs_operational": 3, 00:28:45.021 "base_bdevs_list": [ 00:28:45.021 { 00:28:45.021 "name": null, 00:28:45.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.021 "is_configured": false, 00:28:45.021 "data_offset": 0, 00:28:45.021 "data_size": 63488 00:28:45.021 }, 00:28:45.021 { 00:28:45.021 "name": "BaseBdev2", 00:28:45.021 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:45.021 "is_configured": true, 00:28:45.021 "data_offset": 2048, 00:28:45.021 "data_size": 63488 00:28:45.021 }, 00:28:45.021 { 00:28:45.021 "name": "BaseBdev3", 00:28:45.021 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:45.021 "is_configured": true, 00:28:45.021 "data_offset": 2048, 00:28:45.021 "data_size": 63488 00:28:45.021 }, 
00:28:45.021 { 00:28:45.021 "name": "BaseBdev4", 00:28:45.021 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:45.021 "is_configured": true, 00:28:45.021 "data_offset": 2048, 00:28:45.021 "data_size": 63488 00:28:45.021 } 00:28:45.021 ] 00:28:45.021 }' 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:45.021 [2024-10-07 07:49:44.434676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:45.021 [2024-10-07 07:49:44.452848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:45.021 07:49:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:45.021 [2024-10-07 07:49:44.466067] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:45.959 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:45.960 "name": "raid_bdev1", 00:28:45.960 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:45.960 "strip_size_kb": 64, 00:28:45.960 "state": "online", 00:28:45.960 "raid_level": "raid5f", 00:28:45.960 "superblock": true, 00:28:45.960 "num_base_bdevs": 4, 00:28:45.960 "num_base_bdevs_discovered": 4, 00:28:45.960 "num_base_bdevs_operational": 4, 00:28:45.960 "process": { 00:28:45.960 "type": "rebuild", 00:28:45.960 "target": "spare", 00:28:45.960 "progress": { 00:28:45.960 "blocks": 17280, 00:28:45.960 "percent": 9 00:28:45.960 } 00:28:45.960 }, 00:28:45.960 "base_bdevs_list": [ 00:28:45.960 { 00:28:45.960 "name": "spare", 00:28:45.960 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:45.960 "is_configured": true, 00:28:45.960 "data_offset": 2048, 00:28:45.960 "data_size": 63488 00:28:45.960 }, 00:28:45.960 { 00:28:45.960 "name": "BaseBdev2", 00:28:45.960 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:45.960 "is_configured": true, 00:28:45.960 "data_offset": 2048, 00:28:45.960 "data_size": 63488 00:28:45.960 }, 00:28:45.960 { 00:28:45.960 "name": "BaseBdev3", 00:28:45.960 "uuid": 
"d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:45.960 "is_configured": true, 00:28:45.960 "data_offset": 2048, 00:28:45.960 "data_size": 63488 00:28:45.960 }, 00:28:45.960 { 00:28:45.960 "name": "BaseBdev4", 00:28:45.960 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:45.960 "is_configured": true, 00:28:45.960 "data_offset": 2048, 00:28:45.960 "data_size": 63488 00:28:45.960 } 00:28:45.960 ] 00:28:45.960 }' 00:28:45.960 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:46.219 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=675 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:28:46.219 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:46.220 "name": "raid_bdev1", 00:28:46.220 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:46.220 "strip_size_kb": 64, 00:28:46.220 "state": "online", 00:28:46.220 "raid_level": "raid5f", 00:28:46.220 "superblock": true, 00:28:46.220 "num_base_bdevs": 4, 00:28:46.220 "num_base_bdevs_discovered": 4, 00:28:46.220 "num_base_bdevs_operational": 4, 00:28:46.220 "process": { 00:28:46.220 "type": "rebuild", 00:28:46.220 "target": "spare", 00:28:46.220 "progress": { 00:28:46.220 "blocks": 21120, 00:28:46.220 "percent": 11 00:28:46.220 } 00:28:46.220 }, 00:28:46.220 "base_bdevs_list": [ 00:28:46.220 { 00:28:46.220 "name": "spare", 00:28:46.220 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:46.220 "is_configured": true, 00:28:46.220 "data_offset": 2048, 00:28:46.220 "data_size": 63488 00:28:46.220 }, 00:28:46.220 { 00:28:46.220 "name": "BaseBdev2", 00:28:46.220 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:46.220 "is_configured": true, 00:28:46.220 "data_offset": 2048, 00:28:46.220 "data_size": 63488 00:28:46.220 }, 00:28:46.220 { 00:28:46.220 "name": "BaseBdev3", 00:28:46.220 "uuid": 
"d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:46.220 "is_configured": true, 00:28:46.220 "data_offset": 2048, 00:28:46.220 "data_size": 63488 00:28:46.220 }, 00:28:46.220 { 00:28:46.220 "name": "BaseBdev4", 00:28:46.220 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:46.220 "is_configured": true, 00:28:46.220 "data_offset": 2048, 00:28:46.220 "data_size": 63488 00:28:46.220 } 00:28:46.220 ] 00:28:46.220 }' 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:46.220 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:46.479 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.479 07:49:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # 
xtrace_disable 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:47.416 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:47.416 "name": "raid_bdev1", 00:28:47.416 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:47.416 "strip_size_kb": 64, 00:28:47.416 "state": "online", 00:28:47.416 "raid_level": "raid5f", 00:28:47.416 "superblock": true, 00:28:47.416 "num_base_bdevs": 4, 00:28:47.416 "num_base_bdevs_discovered": 4, 00:28:47.416 "num_base_bdevs_operational": 4, 00:28:47.416 "process": { 00:28:47.416 "type": "rebuild", 00:28:47.417 "target": "spare", 00:28:47.417 "progress": { 00:28:47.417 "blocks": 44160, 00:28:47.417 "percent": 23 00:28:47.417 } 00:28:47.417 }, 00:28:47.417 "base_bdevs_list": [ 00:28:47.417 { 00:28:47.417 "name": "spare", 00:28:47.417 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:47.417 "is_configured": true, 00:28:47.417 "data_offset": 2048, 00:28:47.417 "data_size": 63488 00:28:47.417 }, 00:28:47.417 { 00:28:47.417 "name": "BaseBdev2", 00:28:47.417 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:47.417 "is_configured": true, 00:28:47.417 "data_offset": 2048, 00:28:47.417 "data_size": 63488 00:28:47.417 }, 00:28:47.417 { 00:28:47.417 "name": "BaseBdev3", 00:28:47.417 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:47.417 "is_configured": true, 00:28:47.417 "data_offset": 2048, 00:28:47.417 "data_size": 63488 00:28:47.417 }, 00:28:47.417 { 00:28:47.417 "name": "BaseBdev4", 00:28:47.417 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:47.417 "is_configured": true, 00:28:47.417 "data_offset": 2048, 00:28:47.417 "data_size": 63488 00:28:47.417 } 00:28:47.417 ] 00:28:47.417 }' 00:28:47.417 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:47.417 07:49:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:47.417 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:47.417 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:47.417 07:49:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:48.796 "name": "raid_bdev1", 00:28:48.796 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:48.796 "strip_size_kb": 64, 00:28:48.796 "state": "online", 00:28:48.796 "raid_level": "raid5f", 00:28:48.796 "superblock": true, 
00:28:48.796 "num_base_bdevs": 4, 00:28:48.796 "num_base_bdevs_discovered": 4, 00:28:48.796 "num_base_bdevs_operational": 4, 00:28:48.796 "process": { 00:28:48.796 "type": "rebuild", 00:28:48.796 "target": "spare", 00:28:48.796 "progress": { 00:28:48.796 "blocks": 65280, 00:28:48.796 "percent": 34 00:28:48.796 } 00:28:48.796 }, 00:28:48.796 "base_bdevs_list": [ 00:28:48.796 { 00:28:48.796 "name": "spare", 00:28:48.796 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:48.796 "is_configured": true, 00:28:48.796 "data_offset": 2048, 00:28:48.796 "data_size": 63488 00:28:48.796 }, 00:28:48.796 { 00:28:48.796 "name": "BaseBdev2", 00:28:48.796 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:48.796 "is_configured": true, 00:28:48.796 "data_offset": 2048, 00:28:48.796 "data_size": 63488 00:28:48.796 }, 00:28:48.796 { 00:28:48.796 "name": "BaseBdev3", 00:28:48.796 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:48.796 "is_configured": true, 00:28:48.796 "data_offset": 2048, 00:28:48.796 "data_size": 63488 00:28:48.796 }, 00:28:48.796 { 00:28:48.796 "name": "BaseBdev4", 00:28:48.796 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:48.796 "is_configured": true, 00:28:48.796 "data_offset": 2048, 00:28:48.796 "data_size": 63488 00:28:48.796 } 00:28:48.796 ] 00:28:48.796 }' 00:28:48.796 07:49:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:48.796 07:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:48.796 07:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:48.796 07:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:48.796 07:49:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:49.732 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:49.732 07:49:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.732 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:49.732 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:49.732 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:49.732 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:49.732 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:49.733 "name": "raid_bdev1", 00:28:49.733 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:49.733 "strip_size_kb": 64, 00:28:49.733 "state": "online", 00:28:49.733 "raid_level": "raid5f", 00:28:49.733 "superblock": true, 00:28:49.733 "num_base_bdevs": 4, 00:28:49.733 "num_base_bdevs_discovered": 4, 00:28:49.733 "num_base_bdevs_operational": 4, 00:28:49.733 "process": { 00:28:49.733 "type": "rebuild", 00:28:49.733 "target": "spare", 00:28:49.733 "progress": { 00:28:49.733 "blocks": 86400, 00:28:49.733 "percent": 45 00:28:49.733 } 00:28:49.733 }, 00:28:49.733 "base_bdevs_list": [ 00:28:49.733 { 00:28:49.733 "name": "spare", 00:28:49.733 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:49.733 "is_configured": true, 00:28:49.733 "data_offset": 2048, 00:28:49.733 
"data_size": 63488 00:28:49.733 }, 00:28:49.733 { 00:28:49.733 "name": "BaseBdev2", 00:28:49.733 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:49.733 "is_configured": true, 00:28:49.733 "data_offset": 2048, 00:28:49.733 "data_size": 63488 00:28:49.733 }, 00:28:49.733 { 00:28:49.733 "name": "BaseBdev3", 00:28:49.733 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:49.733 "is_configured": true, 00:28:49.733 "data_offset": 2048, 00:28:49.733 "data_size": 63488 00:28:49.733 }, 00:28:49.733 { 00:28:49.733 "name": "BaseBdev4", 00:28:49.733 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:49.733 "is_configured": true, 00:28:49.733 "data_offset": 2048, 00:28:49.733 "data_size": 63488 00:28:49.733 } 00:28:49.733 ] 00:28:49.733 }' 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.733 07:49:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:51.111 "name": "raid_bdev1", 00:28:51.111 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:51.111 "strip_size_kb": 64, 00:28:51.111 "state": "online", 00:28:51.111 "raid_level": "raid5f", 00:28:51.111 "superblock": true, 00:28:51.111 "num_base_bdevs": 4, 00:28:51.111 "num_base_bdevs_discovered": 4, 00:28:51.111 "num_base_bdevs_operational": 4, 00:28:51.111 "process": { 00:28:51.111 "type": "rebuild", 00:28:51.111 "target": "spare", 00:28:51.111 "progress": { 00:28:51.111 "blocks": 109440, 00:28:51.111 "percent": 57 00:28:51.111 } 00:28:51.111 }, 00:28:51.111 "base_bdevs_list": [ 00:28:51.111 { 00:28:51.111 "name": "spare", 00:28:51.111 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:51.111 "is_configured": true, 00:28:51.111 "data_offset": 2048, 00:28:51.111 "data_size": 63488 00:28:51.111 }, 00:28:51.111 { 00:28:51.111 "name": "BaseBdev2", 00:28:51.111 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:51.111 "is_configured": true, 00:28:51.111 "data_offset": 2048, 00:28:51.111 "data_size": 63488 00:28:51.111 }, 00:28:51.111 { 00:28:51.111 "name": "BaseBdev3", 00:28:51.111 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:51.111 "is_configured": true, 00:28:51.111 "data_offset": 2048, 00:28:51.111 "data_size": 63488 00:28:51.111 }, 00:28:51.111 { 00:28:51.111 "name": "BaseBdev4", 
00:28:51.111 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:51.111 "is_configured": true, 00:28:51.111 "data_offset": 2048, 00:28:51.111 "data_size": 63488 00:28:51.111 } 00:28:51.111 ] 00:28:51.111 }' 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:51.111 07:49:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 
== 0 ]] 00:28:52.047 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:52.047 "name": "raid_bdev1", 00:28:52.047 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:52.047 "strip_size_kb": 64, 00:28:52.047 "state": "online", 00:28:52.048 "raid_level": "raid5f", 00:28:52.048 "superblock": true, 00:28:52.048 "num_base_bdevs": 4, 00:28:52.048 "num_base_bdevs_discovered": 4, 00:28:52.048 "num_base_bdevs_operational": 4, 00:28:52.048 "process": { 00:28:52.048 "type": "rebuild", 00:28:52.048 "target": "spare", 00:28:52.048 "progress": { 00:28:52.048 "blocks": 130560, 00:28:52.048 "percent": 68 00:28:52.048 } 00:28:52.048 }, 00:28:52.048 "base_bdevs_list": [ 00:28:52.048 { 00:28:52.048 "name": "spare", 00:28:52.048 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:52.048 "is_configured": true, 00:28:52.048 "data_offset": 2048, 00:28:52.048 "data_size": 63488 00:28:52.048 }, 00:28:52.048 { 00:28:52.048 "name": "BaseBdev2", 00:28:52.048 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:52.048 "is_configured": true, 00:28:52.048 "data_offset": 2048, 00:28:52.048 "data_size": 63488 00:28:52.048 }, 00:28:52.048 { 00:28:52.048 "name": "BaseBdev3", 00:28:52.048 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:52.048 "is_configured": true, 00:28:52.048 "data_offset": 2048, 00:28:52.048 "data_size": 63488 00:28:52.048 }, 00:28:52.048 { 00:28:52.048 "name": "BaseBdev4", 00:28:52.048 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:52.048 "is_configured": true, 00:28:52.048 "data_offset": 2048, 00:28:52.048 "data_size": 63488 00:28:52.048 } 00:28:52.048 ] 00:28:52.048 }' 00:28:52.048 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:52.048 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.048 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:28:52.048 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.048 07:49:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.982 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:53.239 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:53.239 "name": "raid_bdev1", 00:28:53.239 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:53.239 "strip_size_kb": 64, 00:28:53.239 "state": "online", 00:28:53.239 "raid_level": "raid5f", 00:28:53.239 "superblock": true, 00:28:53.239 "num_base_bdevs": 4, 00:28:53.239 "num_base_bdevs_discovered": 4, 00:28:53.239 "num_base_bdevs_operational": 4, 00:28:53.239 "process": { 00:28:53.239 "type": "rebuild", 00:28:53.239 "target": "spare", 
00:28:53.239 "progress": { 00:28:53.239 "blocks": 151680, 00:28:53.239 "percent": 79 00:28:53.239 } 00:28:53.239 }, 00:28:53.239 "base_bdevs_list": [ 00:28:53.239 { 00:28:53.239 "name": "spare", 00:28:53.239 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:53.239 "is_configured": true, 00:28:53.239 "data_offset": 2048, 00:28:53.239 "data_size": 63488 00:28:53.239 }, 00:28:53.239 { 00:28:53.239 "name": "BaseBdev2", 00:28:53.239 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:53.239 "is_configured": true, 00:28:53.239 "data_offset": 2048, 00:28:53.239 "data_size": 63488 00:28:53.239 }, 00:28:53.239 { 00:28:53.239 "name": "BaseBdev3", 00:28:53.239 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:53.239 "is_configured": true, 00:28:53.239 "data_offset": 2048, 00:28:53.239 "data_size": 63488 00:28:53.239 }, 00:28:53.239 { 00:28:53.239 "name": "BaseBdev4", 00:28:53.239 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:53.239 "is_configured": true, 00:28:53.239 "data_offset": 2048, 00:28:53.239 "data_size": 63488 00:28:53.239 } 00:28:53.239 ] 00:28:53.239 }' 00:28:53.239 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:53.239 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.240 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:53.240 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.240 07:49:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:54.172 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:54.172 "name": "raid_bdev1", 00:28:54.172 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:54.172 "strip_size_kb": 64, 00:28:54.172 "state": "online", 00:28:54.172 "raid_level": "raid5f", 00:28:54.172 "superblock": true, 00:28:54.172 "num_base_bdevs": 4, 00:28:54.172 "num_base_bdevs_discovered": 4, 00:28:54.172 "num_base_bdevs_operational": 4, 00:28:54.172 "process": { 00:28:54.172 "type": "rebuild", 00:28:54.172 "target": "spare", 00:28:54.172 "progress": { 00:28:54.172 "blocks": 174720, 00:28:54.172 "percent": 91 00:28:54.172 } 00:28:54.172 }, 00:28:54.172 "base_bdevs_list": [ 00:28:54.172 { 00:28:54.172 "name": "spare", 00:28:54.172 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:54.172 "is_configured": true, 00:28:54.172 "data_offset": 2048, 00:28:54.172 "data_size": 63488 00:28:54.172 }, 00:28:54.172 { 00:28:54.172 "name": "BaseBdev2", 00:28:54.172 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:54.172 "is_configured": true, 00:28:54.173 
"data_offset": 2048, 00:28:54.173 "data_size": 63488 00:28:54.173 }, 00:28:54.173 { 00:28:54.173 "name": "BaseBdev3", 00:28:54.173 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:54.173 "is_configured": true, 00:28:54.173 "data_offset": 2048, 00:28:54.173 "data_size": 63488 00:28:54.173 }, 00:28:54.173 { 00:28:54.173 "name": "BaseBdev4", 00:28:54.173 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:54.173 "is_configured": true, 00:28:54.173 "data_offset": 2048, 00:28:54.173 "data_size": 63488 00:28:54.173 } 00:28:54.173 ] 00:28:54.173 }' 00:28:54.173 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:54.430 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:54.430 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:54.430 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:54.430 07:49:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:54.994 [2024-10-07 07:49:54.548750] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:54.994 [2024-10-07 07:49:54.548844] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:54.994 [2024-10-07 07:49:54.549007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:55.252 07:49:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:55.252 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:55.510 "name": "raid_bdev1", 00:28:55.510 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:55.510 "strip_size_kb": 64, 00:28:55.510 "state": "online", 00:28:55.510 "raid_level": "raid5f", 00:28:55.510 "superblock": true, 00:28:55.510 "num_base_bdevs": 4, 00:28:55.510 "num_base_bdevs_discovered": 4, 00:28:55.510 "num_base_bdevs_operational": 4, 00:28:55.510 "base_bdevs_list": [ 00:28:55.510 { 00:28:55.510 "name": "spare", 00:28:55.510 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 }, 00:28:55.510 { 00:28:55.510 "name": "BaseBdev2", 00:28:55.510 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 }, 00:28:55.510 { 00:28:55.510 "name": "BaseBdev3", 00:28:55.510 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 }, 00:28:55.510 { 00:28:55.510 "name": "BaseBdev4", 00:28:55.510 "uuid": 
"9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 } 00:28:55.510 ] 00:28:55.510 }' 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:55.510 "name": 
"raid_bdev1", 00:28:55.510 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:55.510 "strip_size_kb": 64, 00:28:55.510 "state": "online", 00:28:55.510 "raid_level": "raid5f", 00:28:55.510 "superblock": true, 00:28:55.510 "num_base_bdevs": 4, 00:28:55.510 "num_base_bdevs_discovered": 4, 00:28:55.510 "num_base_bdevs_operational": 4, 00:28:55.510 "base_bdevs_list": [ 00:28:55.510 { 00:28:55.510 "name": "spare", 00:28:55.510 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 }, 00:28:55.510 { 00:28:55.510 "name": "BaseBdev2", 00:28:55.510 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 }, 00:28:55.510 { 00:28:55.510 "name": "BaseBdev3", 00:28:55.510 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 }, 00:28:55.510 { 00:28:55.510 "name": "BaseBdev4", 00:28:55.510 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:55.510 "is_configured": true, 00:28:55.510 "data_offset": 2048, 00:28:55.510 "data_size": 63488 00:28:55.510 } 00:28:55.510 ] 00:28:55.510 }' 00:28:55.510 07:49:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:55.510 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:55.510 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:55.768 "name": "raid_bdev1", 00:28:55.768 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:55.768 "strip_size_kb": 64, 00:28:55.768 "state": "online", 00:28:55.768 "raid_level": "raid5f", 00:28:55.768 "superblock": true, 00:28:55.768 "num_base_bdevs": 4, 00:28:55.768 "num_base_bdevs_discovered": 4, 00:28:55.768 "num_base_bdevs_operational": 4, 00:28:55.768 "base_bdevs_list": [ 00:28:55.768 { 00:28:55.768 "name": "spare", 
00:28:55.768 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:55.768 "is_configured": true, 00:28:55.768 "data_offset": 2048, 00:28:55.768 "data_size": 63488 00:28:55.768 }, 00:28:55.768 { 00:28:55.768 "name": "BaseBdev2", 00:28:55.768 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:55.768 "is_configured": true, 00:28:55.768 "data_offset": 2048, 00:28:55.768 "data_size": 63488 00:28:55.768 }, 00:28:55.768 { 00:28:55.768 "name": "BaseBdev3", 00:28:55.768 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:55.768 "is_configured": true, 00:28:55.768 "data_offset": 2048, 00:28:55.768 "data_size": 63488 00:28:55.768 }, 00:28:55.768 { 00:28:55.768 "name": "BaseBdev4", 00:28:55.768 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:55.768 "is_configured": true, 00:28:55.768 "data_offset": 2048, 00:28:55.768 "data_size": 63488 00:28:55.768 } 00:28:55.768 ] 00:28:55.768 }' 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:55.768 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 [2024-10-07 07:49:55.549674] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:56.027 [2024-10-07 07:49:55.549713] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:56.027 [2024-10-07 07:49:55.549813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:56.027 [2024-10-07 07:49:55.549910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:56.027 [2024-10-07 07:49:55.549923] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.027 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:56.285 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:56.543 /dev/nbd0 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:56.543 1+0 records in 00:28:56.543 1+0 records out 00:28:56.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229314 s, 17.9 MB/s 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:56.543 07:49:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:56.802 /dev/nbd1 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local i 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # break 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:56.802 1+0 records in 00:28:56.802 1+0 records out 00:28:56.802 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000378757 s, 10.8 MB/s 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # size=4096 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # return 0 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:56.802 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:57.060 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:57.323 
07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:57.323 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.323 [2024-10-07 07:49:56.876167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:57.323 [2024-10-07 07:49:56.876235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.323 [2024-10-07 07:49:56.876268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:28:57.323 [2024-10-07 07:49:56.876283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.323 [2024-10-07 07:49:56.879356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:57.323 [2024-10-07 07:49:56.879403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:57.323 [2024-10-07 07:49:56.879513] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:57.323 [2024-10-07 07:49:56.879580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:57.323 [2024-10-07 07:49:56.879757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:57.593 [2024-10-07 07:49:56.879865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:57.593 [2024-10-07 07:49:56.879961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:28:57.593 spare 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.593 [2024-10-07 07:49:56.980074] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:57.593 [2024-10-07 07:49:56.980140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:57.593 [2024-10-07 07:49:56.980547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:28:57.593 [2024-10-07 07:49:56.988667] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:57.593 [2024-10-07 07:49:56.988697] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:57.593 [2024-10-07 07:49:56.988971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:57.593 07:49:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:57.593 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:57.593 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.593 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:57.593 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.593 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:57.593 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:57.593 "name": "raid_bdev1", 00:28:57.593 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:57.593 "strip_size_kb": 64, 00:28:57.593 "state": "online", 00:28:57.593 "raid_level": "raid5f", 00:28:57.593 "superblock": true, 00:28:57.593 "num_base_bdevs": 4, 00:28:57.593 "num_base_bdevs_discovered": 4, 00:28:57.593 "num_base_bdevs_operational": 4, 00:28:57.593 "base_bdevs_list": [ 00:28:57.593 { 00:28:57.593 "name": "spare", 00:28:57.593 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:57.593 "is_configured": true, 00:28:57.593 "data_offset": 2048, 00:28:57.593 "data_size": 63488 00:28:57.593 }, 00:28:57.593 { 00:28:57.593 "name": "BaseBdev2", 00:28:57.593 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:57.593 "is_configured": true, 00:28:57.594 "data_offset": 2048, 00:28:57.594 "data_size": 63488 00:28:57.594 }, 00:28:57.594 { 00:28:57.594 "name": 
"BaseBdev3", 00:28:57.594 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:57.594 "is_configured": true, 00:28:57.594 "data_offset": 2048, 00:28:57.594 "data_size": 63488 00:28:57.594 }, 00:28:57.594 { 00:28:57.594 "name": "BaseBdev4", 00:28:57.594 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:57.594 "is_configured": true, 00:28:57.594 "data_offset": 2048, 00:28:57.594 "data_size": 63488 00:28:57.594 } 00:28:57.594 ] 00:28:57.594 }' 00:28:57.594 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:57.594 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:58.163 "name": "raid_bdev1", 00:28:58.163 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:58.163 
"strip_size_kb": 64, 00:28:58.163 "state": "online", 00:28:58.163 "raid_level": "raid5f", 00:28:58.163 "superblock": true, 00:28:58.163 "num_base_bdevs": 4, 00:28:58.163 "num_base_bdevs_discovered": 4, 00:28:58.163 "num_base_bdevs_operational": 4, 00:28:58.163 "base_bdevs_list": [ 00:28:58.163 { 00:28:58.163 "name": "spare", 00:28:58.163 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:58.163 "is_configured": true, 00:28:58.163 "data_offset": 2048, 00:28:58.163 "data_size": 63488 00:28:58.163 }, 00:28:58.163 { 00:28:58.163 "name": "BaseBdev2", 00:28:58.163 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:58.163 "is_configured": true, 00:28:58.163 "data_offset": 2048, 00:28:58.163 "data_size": 63488 00:28:58.163 }, 00:28:58.163 { 00:28:58.163 "name": "BaseBdev3", 00:28:58.163 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:58.163 "is_configured": true, 00:28:58.163 "data_offset": 2048, 00:28:58.163 "data_size": 63488 00:28:58.163 }, 00:28:58.163 { 00:28:58.163 "name": "BaseBdev4", 00:28:58.163 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:58.163 "is_configured": true, 00:28:58.163 "data_offset": 2048, 00:28:58.163 "data_size": 63488 00:28:58.163 } 00:28:58.163 ] 00:28:58.163 }' 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.163 [2024-10-07 07:49:57.638102] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:58.163 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:58.163 "name": "raid_bdev1", 00:28:58.163 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:58.163 "strip_size_kb": 64, 00:28:58.163 "state": "online", 00:28:58.163 "raid_level": "raid5f", 00:28:58.163 "superblock": true, 00:28:58.163 "num_base_bdevs": 4, 00:28:58.163 "num_base_bdevs_discovered": 3, 00:28:58.163 "num_base_bdevs_operational": 3, 00:28:58.164 "base_bdevs_list": [ 00:28:58.164 { 00:28:58.164 "name": null, 00:28:58.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.164 "is_configured": false, 00:28:58.164 "data_offset": 0, 00:28:58.164 "data_size": 63488 00:28:58.164 }, 00:28:58.164 { 00:28:58.164 "name": "BaseBdev2", 00:28:58.164 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:58.164 "is_configured": true, 00:28:58.164 "data_offset": 2048, 00:28:58.164 "data_size": 63488 00:28:58.164 }, 00:28:58.164 { 00:28:58.164 "name": "BaseBdev3", 00:28:58.164 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:58.164 "is_configured": true, 00:28:58.164 "data_offset": 2048, 00:28:58.164 "data_size": 63488 00:28:58.164 }, 00:28:58.164 { 00:28:58.164 "name": "BaseBdev4", 00:28:58.164 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:58.164 "is_configured": true, 00:28:58.164 "data_offset": 2048, 00:28:58.164 "data_size": 63488 00:28:58.164 } 00:28:58.164 ] 00:28:58.164 }' 
00:28:58.164 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:58.164 07:49:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.731 07:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:58.731 07:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:58.731 07:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.731 [2024-10-07 07:49:58.054252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.731 [2024-10-07 07:49:58.054604] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:58.731 [2024-10-07 07:49:58.054635] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:58.731 [2024-10-07 07:49:58.054682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.731 [2024-10-07 07:49:58.071455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:28:58.731 07:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:58.731 07:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:58.731 [2024-10-07 07:49:58.082616] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:59.665 "name": "raid_bdev1", 00:28:59.665 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:59.665 "strip_size_kb": 64, 00:28:59.665 "state": "online", 00:28:59.665 "raid_level": "raid5f", 00:28:59.665 "superblock": true, 00:28:59.665 "num_base_bdevs": 4, 00:28:59.665 "num_base_bdevs_discovered": 4, 00:28:59.665 "num_base_bdevs_operational": 4, 00:28:59.665 "process": { 00:28:59.665 "type": "rebuild", 00:28:59.665 "target": "spare", 00:28:59.665 "progress": { 00:28:59.665 "blocks": 19200, 00:28:59.665 "percent": 10 00:28:59.665 } 00:28:59.665 }, 00:28:59.665 "base_bdevs_list": [ 00:28:59.665 { 00:28:59.665 "name": "spare", 00:28:59.665 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:28:59.665 "is_configured": true, 00:28:59.665 "data_offset": 2048, 00:28:59.665 "data_size": 63488 00:28:59.665 }, 00:28:59.665 { 00:28:59.665 "name": "BaseBdev2", 00:28:59.665 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:59.665 "is_configured": true, 00:28:59.665 "data_offset": 2048, 00:28:59.665 "data_size": 63488 00:28:59.665 }, 00:28:59.665 { 00:28:59.665 "name": "BaseBdev3", 00:28:59.665 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:59.665 
"is_configured": true, 00:28:59.665 "data_offset": 2048, 00:28:59.665 "data_size": 63488 00:28:59.665 }, 00:28:59.665 { 00:28:59.665 "name": "BaseBdev4", 00:28:59.665 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:59.665 "is_configured": true, 00:28:59.665 "data_offset": 2048, 00:28:59.665 "data_size": 63488 00:28:59.665 } 00:28:59.665 ] 00:28:59.665 }' 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:59.665 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.924 [2024-10-07 07:49:59.228179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.924 [2024-10-07 07:49:59.292310] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:59.924 [2024-10-07 07:49:59.292417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.924 [2024-10-07 07:49:59.292449] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.924 [2024-10-07 07:49:59.292481] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:59.924 "name": "raid_bdev1", 00:28:59.924 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:28:59.924 "strip_size_kb": 64, 00:28:59.924 "state": "online", 00:28:59.924 "raid_level": "raid5f", 00:28:59.924 "superblock": true, 00:28:59.924 "num_base_bdevs": 4, 00:28:59.924 "num_base_bdevs_discovered": 3, 
00:28:59.924 "num_base_bdevs_operational": 3, 00:28:59.924 "base_bdevs_list": [ 00:28:59.924 { 00:28:59.924 "name": null, 00:28:59.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.924 "is_configured": false, 00:28:59.924 "data_offset": 0, 00:28:59.924 "data_size": 63488 00:28:59.924 }, 00:28:59.924 { 00:28:59.924 "name": "BaseBdev2", 00:28:59.924 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:28:59.924 "is_configured": true, 00:28:59.924 "data_offset": 2048, 00:28:59.924 "data_size": 63488 00:28:59.924 }, 00:28:59.924 { 00:28:59.924 "name": "BaseBdev3", 00:28:59.924 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:28:59.924 "is_configured": true, 00:28:59.924 "data_offset": 2048, 00:28:59.924 "data_size": 63488 00:28:59.924 }, 00:28:59.924 { 00:28:59.924 "name": "BaseBdev4", 00:28:59.924 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:28:59.924 "is_configured": true, 00:28:59.924 "data_offset": 2048, 00:28:59.924 "data_size": 63488 00:28:59.924 } 00:28:59.924 ] 00:28:59.924 }' 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:59.924 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.492 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:00.492 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:00.492 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.492 [2024-10-07 07:49:59.793148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:00.492 [2024-10-07 07:49:59.793233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:00.492 [2024-10-07 07:49:59.793269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:29:00.492 [2024-10-07 07:49:59.793287] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:00.492 [2024-10-07 07:49:59.793881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:00.492 [2024-10-07 07:49:59.793918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:00.492 [2024-10-07 07:49:59.794028] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:00.492 [2024-10-07 07:49:59.794047] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:00.492 [2024-10-07 07:49:59.794062] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:00.492 [2024-10-07 07:49:59.794092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:00.492 [2024-10-07 07:49:59.811025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:29:00.492 spare 00:29:00.492 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:00.492 07:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:00.492 [2024-10-07 07:49:59.821607] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:01.428 "name": "raid_bdev1", 00:29:01.428 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:29:01.428 "strip_size_kb": 64, 00:29:01.428 "state": "online", 00:29:01.428 "raid_level": "raid5f", 00:29:01.428 "superblock": true, 00:29:01.428 "num_base_bdevs": 4, 00:29:01.428 "num_base_bdevs_discovered": 4, 00:29:01.428 "num_base_bdevs_operational": 4, 00:29:01.428 "process": { 00:29:01.428 "type": "rebuild", 00:29:01.428 "target": "spare", 00:29:01.428 "progress": { 00:29:01.428 "blocks": 19200, 00:29:01.428 "percent": 10 00:29:01.428 } 00:29:01.428 }, 00:29:01.428 "base_bdevs_list": [ 00:29:01.428 { 00:29:01.428 "name": "spare", 00:29:01.428 "uuid": "26a8ac16-58d0-53bc-8215-b2f877fc0894", 00:29:01.428 "is_configured": true, 00:29:01.428 "data_offset": 2048, 00:29:01.428 "data_size": 63488 00:29:01.428 }, 00:29:01.428 { 00:29:01.428 "name": "BaseBdev2", 00:29:01.428 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:29:01.428 "is_configured": true, 00:29:01.428 "data_offset": 2048, 00:29:01.428 "data_size": 63488 00:29:01.428 }, 00:29:01.428 { 00:29:01.428 "name": "BaseBdev3", 00:29:01.428 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:29:01.428 "is_configured": true, 00:29:01.428 "data_offset": 2048, 00:29:01.428 "data_size": 63488 00:29:01.428 }, 00:29:01.428 { 00:29:01.428 "name": "BaseBdev4", 00:29:01.428 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 
00:29:01.428 "is_configured": true, 00:29:01.428 "data_offset": 2048, 00:29:01.428 "data_size": 63488 00:29:01.428 } 00:29:01.428 ] 00:29:01.428 }' 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:01.428 07:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.428 [2024-10-07 07:50:00.962937] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:01.687 [2024-10-07 07:50:01.032142] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:01.687 [2024-10-07 07:50:01.032207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:01.687 [2024-10-07 07:50:01.032230] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:01.687 [2024-10-07 07:50:01.032239] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:01.687 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:01.687 "name": "raid_bdev1", 00:29:01.687 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:29:01.688 "strip_size_kb": 64, 00:29:01.688 "state": "online", 00:29:01.688 "raid_level": "raid5f", 00:29:01.688 "superblock": true, 00:29:01.688 "num_base_bdevs": 4, 00:29:01.688 "num_base_bdevs_discovered": 3, 00:29:01.688 "num_base_bdevs_operational": 3, 00:29:01.688 "base_bdevs_list": [ 00:29:01.688 { 00:29:01.688 "name": null, 00:29:01.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.688 "is_configured": 
false, 00:29:01.688 "data_offset": 0, 00:29:01.688 "data_size": 63488 00:29:01.688 }, 00:29:01.688 { 00:29:01.688 "name": "BaseBdev2", 00:29:01.688 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:29:01.688 "is_configured": true, 00:29:01.688 "data_offset": 2048, 00:29:01.688 "data_size": 63488 00:29:01.688 }, 00:29:01.688 { 00:29:01.688 "name": "BaseBdev3", 00:29:01.688 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:29:01.688 "is_configured": true, 00:29:01.688 "data_offset": 2048, 00:29:01.688 "data_size": 63488 00:29:01.688 }, 00:29:01.688 { 00:29:01.688 "name": "BaseBdev4", 00:29:01.688 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:29:01.688 "is_configured": true, 00:29:01.688 "data_offset": 2048, 00:29:01.688 "data_size": 63488 00:29:01.688 } 00:29:01.688 ] 00:29:01.688 }' 00:29:01.688 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:01.688 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.947 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:02.206 "name": "raid_bdev1", 00:29:02.206 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:29:02.206 "strip_size_kb": 64, 00:29:02.206 "state": "online", 00:29:02.206 "raid_level": "raid5f", 00:29:02.206 "superblock": true, 00:29:02.206 "num_base_bdevs": 4, 00:29:02.206 "num_base_bdevs_discovered": 3, 00:29:02.206 "num_base_bdevs_operational": 3, 00:29:02.206 "base_bdevs_list": [ 00:29:02.206 { 00:29:02.206 "name": null, 00:29:02.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.206 "is_configured": false, 00:29:02.206 "data_offset": 0, 00:29:02.206 "data_size": 63488 00:29:02.206 }, 00:29:02.206 { 00:29:02.206 "name": "BaseBdev2", 00:29:02.206 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:29:02.206 "is_configured": true, 00:29:02.206 "data_offset": 2048, 00:29:02.206 "data_size": 63488 00:29:02.206 }, 00:29:02.206 { 00:29:02.206 "name": "BaseBdev3", 00:29:02.206 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:29:02.206 "is_configured": true, 00:29:02.206 "data_offset": 2048, 00:29:02.206 "data_size": 63488 00:29:02.206 }, 00:29:02.206 { 00:29:02.206 "name": "BaseBdev4", 00:29:02.206 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:29:02.206 "is_configured": true, 00:29:02.206 "data_offset": 2048, 00:29:02.206 "data_size": 63488 00:29:02.206 } 00:29:02.206 ] 00:29:02.206 }' 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e 
]] 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:02.206 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.206 [2024-10-07 07:50:01.631636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:02.206 [2024-10-07 07:50:01.631847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.206 [2024-10-07 07:50:01.631889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:29:02.207 [2024-10-07 07:50:01.631903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.207 [2024-10-07 07:50:01.632444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.207 [2024-10-07 07:50:01.632469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:02.207 [2024-10-07 07:50:01.632558] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:02.207 [2024-10-07 07:50:01.632576] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:02.207 [2024-10-07 07:50:01.632591] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:29:02.207 [2024-10-07 07:50:01.632604] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:02.207 BaseBdev1 00:29:02.207 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:02.207 07:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:03.179 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:03.179 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:03.179 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:03.179 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.180 "name": "raid_bdev1", 00:29:03.180 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:29:03.180 "strip_size_kb": 64, 00:29:03.180 "state": "online", 00:29:03.180 "raid_level": "raid5f", 00:29:03.180 "superblock": true, 00:29:03.180 "num_base_bdevs": 4, 00:29:03.180 "num_base_bdevs_discovered": 3, 00:29:03.180 "num_base_bdevs_operational": 3, 00:29:03.180 "base_bdevs_list": [ 00:29:03.180 { 00:29:03.180 "name": null, 00:29:03.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.180 "is_configured": false, 00:29:03.180 "data_offset": 0, 00:29:03.180 "data_size": 63488 00:29:03.180 }, 00:29:03.180 { 00:29:03.180 "name": "BaseBdev2", 00:29:03.180 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:29:03.180 "is_configured": true, 00:29:03.180 "data_offset": 2048, 00:29:03.180 "data_size": 63488 00:29:03.180 }, 00:29:03.180 { 00:29:03.180 "name": "BaseBdev3", 00:29:03.180 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:29:03.180 "is_configured": true, 00:29:03.180 "data_offset": 2048, 00:29:03.180 "data_size": 63488 00:29:03.180 }, 00:29:03.180 { 00:29:03.180 "name": "BaseBdev4", 00:29:03.180 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:29:03.180 "is_configured": true, 00:29:03.180 "data_offset": 2048, 00:29:03.180 "data_size": 63488 00:29:03.180 } 00:29:03.180 ] 00:29:03.180 }' 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.180 07:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:03.750 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:03.750 "name": "raid_bdev1", 00:29:03.750 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:29:03.750 "strip_size_kb": 64, 00:29:03.750 "state": "online", 00:29:03.750 "raid_level": "raid5f", 00:29:03.750 "superblock": true, 00:29:03.750 "num_base_bdevs": 4, 00:29:03.750 "num_base_bdevs_discovered": 3, 00:29:03.750 "num_base_bdevs_operational": 3, 00:29:03.750 "base_bdevs_list": [ 00:29:03.750 { 00:29:03.750 "name": null, 00:29:03.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.750 "is_configured": false, 00:29:03.750 "data_offset": 0, 00:29:03.750 "data_size": 63488 00:29:03.750 }, 00:29:03.750 { 00:29:03.750 "name": "BaseBdev2", 00:29:03.750 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:29:03.750 "is_configured": true, 00:29:03.750 "data_offset": 2048, 00:29:03.750 "data_size": 63488 00:29:03.750 }, 00:29:03.750 { 00:29:03.750 "name": "BaseBdev3", 00:29:03.750 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:29:03.750 "is_configured": true, 00:29:03.750 "data_offset": 2048, 00:29:03.750 "data_size": 63488 00:29:03.750 }, 
00:29:03.750 { 00:29:03.750 "name": "BaseBdev4", 00:29:03.750 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:29:03.750 "is_configured": true, 00:29:03.750 "data_offset": 2048, 00:29:03.750 "data_size": 63488 00:29:03.750 } 00:29:03.750 ] 00:29:03.750 }' 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # local es=0 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.751 [2024-10-07 07:50:03.252110] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:03.751 [2024-10-07 07:50:03.252284] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:03.751 [2024-10-07 07:50:03.252308] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:03.751 request: 00:29:03.751 { 00:29:03.751 "base_bdev": "BaseBdev1", 00:29:03.751 "raid_bdev": "raid_bdev1", 00:29:03.751 "method": "bdev_raid_add_base_bdev", 00:29:03.751 "req_id": 1 00:29:03.751 } 00:29:03.751 Got JSON-RPC error response 00:29:03.751 response: 00:29:03.751 { 00:29:03.751 "code": -22, 00:29:03.751 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:03.751 } 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@656 -- # es=1 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:29:03.751 07:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:05.129 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:05.130 "name": "raid_bdev1", 00:29:05.130 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:29:05.130 "strip_size_kb": 64, 00:29:05.130 "state": "online", 00:29:05.130 "raid_level": "raid5f", 00:29:05.130 "superblock": true, 00:29:05.130 "num_base_bdevs": 4, 00:29:05.130 "num_base_bdevs_discovered": 3, 00:29:05.130 "num_base_bdevs_operational": 3, 00:29:05.130 "base_bdevs_list": [ 00:29:05.130 { 00:29:05.130 "name": null, 00:29:05.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.130 "is_configured": false, 00:29:05.130 "data_offset": 0, 00:29:05.130 "data_size": 63488 00:29:05.130 }, 00:29:05.130 { 00:29:05.130 "name": "BaseBdev2", 00:29:05.130 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:29:05.130 "is_configured": true, 00:29:05.130 
"data_offset": 2048, 00:29:05.130 "data_size": 63488 00:29:05.130 }, 00:29:05.130 { 00:29:05.130 "name": "BaseBdev3", 00:29:05.130 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:29:05.130 "is_configured": true, 00:29:05.130 "data_offset": 2048, 00:29:05.130 "data_size": 63488 00:29:05.130 }, 00:29:05.130 { 00:29:05.130 "name": "BaseBdev4", 00:29:05.130 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:29:05.130 "is_configured": true, 00:29:05.130 "data_offset": 2048, 00:29:05.130 "data_size": 63488 00:29:05.130 } 00:29:05.130 ] 00:29:05.130 }' 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:05.130 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:05.389 
"name": "raid_bdev1", 00:29:05.389 "uuid": "e928c72e-7f65-4031-a5a2-1d520a14fdfe", 00:29:05.389 "strip_size_kb": 64, 00:29:05.389 "state": "online", 00:29:05.389 "raid_level": "raid5f", 00:29:05.389 "superblock": true, 00:29:05.389 "num_base_bdevs": 4, 00:29:05.389 "num_base_bdevs_discovered": 3, 00:29:05.389 "num_base_bdevs_operational": 3, 00:29:05.389 "base_bdevs_list": [ 00:29:05.389 { 00:29:05.389 "name": null, 00:29:05.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.389 "is_configured": false, 00:29:05.389 "data_offset": 0, 00:29:05.389 "data_size": 63488 00:29:05.389 }, 00:29:05.389 { 00:29:05.389 "name": "BaseBdev2", 00:29:05.389 "uuid": "40bb331d-ba02-5fbf-99bc-fad83e004442", 00:29:05.389 "is_configured": true, 00:29:05.389 "data_offset": 2048, 00:29:05.389 "data_size": 63488 00:29:05.389 }, 00:29:05.389 { 00:29:05.389 "name": "BaseBdev3", 00:29:05.389 "uuid": "d2cc5a1a-b805-5731-bf8f-2b9786869fc1", 00:29:05.389 "is_configured": true, 00:29:05.389 "data_offset": 2048, 00:29:05.389 "data_size": 63488 00:29:05.389 }, 00:29:05.389 { 00:29:05.389 "name": "BaseBdev4", 00:29:05.389 "uuid": "9c58e040-2747-592a-b155-e8a8f37e07c5", 00:29:05.389 "is_configured": true, 00:29:05.389 "data_offset": 2048, 00:29:05.389 "data_size": 63488 00:29:05.389 } 00:29:05.389 ] 00:29:05.389 }' 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85378 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' -z 85378 ']' 00:29:05.389 07:50:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # kill -0 85378 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # uname 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 85378 00:29:05.389 killing process with pid 85378 00:29:05.389 Received shutdown signal, test time was about 60.000000 seconds 00:29:05.389 00:29:05.389 Latency(us) 00:29:05.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.389 =================================================================================================================== 00:29:05.389 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # echo 'killing process with pid 85378' 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # kill 85378 00:29:05.389 [2024-10-07 07:50:04.919152] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:05.389 07:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@977 -- # wait 85378 00:29:05.389 [2024-10-07 07:50:04.919291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:05.389 [2024-10-07 07:50:04.919377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:05.389 [2024-10-07 07:50:04.919394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:05.957 [2024-10-07 
07:50:05.468026] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:07.337 ************************************ 00:29:07.337 END TEST raid5f_rebuild_test_sb 00:29:07.337 ************************************ 00:29:07.337 07:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:29:07.337 00:29:07.337 real 0m27.719s 00:29:07.337 user 0m34.818s 00:29:07.337 sys 0m3.153s 00:29:07.337 07:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # xtrace_disable 00:29:07.337 07:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.597 07:50:06 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:29:07.597 07:50:06 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:29:07.597 07:50:06 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:29:07.597 07:50:06 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:29:07.597 07:50:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:07.597 ************************************ 00:29:07.597 START TEST raid_state_function_test_sb_4k 00:29:07.597 ************************************ 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 2 true 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:07.597 07:50:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:07.597 Process raid pid: 86196 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:07.597 07:50:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86196 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86196' 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86196 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # '[' -z 86196 ']' 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:07.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:07.597 07:50:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:07.597 [2024-10-07 07:50:07.002369] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:29:07.597 [2024-10-07 07:50:07.003314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.857 [2024-10-07 07:50:07.169930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.857 [2024-10-07 07:50:07.402191] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.116 [2024-10-07 07:50:07.630854] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:08.116 [2024-10-07 07:50:07.631052] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@867 -- # return 0 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.375 [2024-10-07 07:50:07.898196] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:08.375 [2024-10-07 07:50:07.898551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:08.375 [2024-10-07 07:50:07.898679] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:08.375 [2024-10-07 07:50:07.898838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.375 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:08.635 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:08.635 "name": "Existed_Raid", 00:29:08.635 "uuid": 
"4d91dffa-83bf-4efc-b3fe-3b5271b61124", 00:29:08.635 "strip_size_kb": 0, 00:29:08.635 "state": "configuring", 00:29:08.635 "raid_level": "raid1", 00:29:08.635 "superblock": true, 00:29:08.635 "num_base_bdevs": 2, 00:29:08.635 "num_base_bdevs_discovered": 0, 00:29:08.635 "num_base_bdevs_operational": 2, 00:29:08.635 "base_bdevs_list": [ 00:29:08.635 { 00:29:08.635 "name": "BaseBdev1", 00:29:08.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.635 "is_configured": false, 00:29:08.635 "data_offset": 0, 00:29:08.635 "data_size": 0 00:29:08.635 }, 00:29:08.635 { 00:29:08.635 "name": "BaseBdev2", 00:29:08.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.635 "is_configured": false, 00:29:08.635 "data_offset": 0, 00:29:08.635 "data_size": 0 00:29:08.635 } 00:29:08.635 ] 00:29:08.635 }' 00:29:08.635 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:08.635 07:50:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.895 [2024-10-07 07:50:08.334194] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:08.895 [2024-10-07 07:50:08.334234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:08.895 07:50:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.895 [2024-10-07 07:50:08.342223] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:08.895 [2024-10-07 07:50:08.342834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:08.895 [2024-10-07 07:50:08.342864] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:08.895 [2024-10-07 07:50:08.342977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.895 [2024-10-07 07:50:08.407006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:08.895 BaseBdev1 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:08.895 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local i 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:08.896 [ 00:29:08.896 { 00:29:08.896 "name": "BaseBdev1", 00:29:08.896 "aliases": [ 00:29:08.896 "ee09df7f-c691-4887-9b93-1460b62f97be" 00:29:08.896 ], 00:29:08.896 "product_name": "Malloc disk", 00:29:08.896 "block_size": 4096, 00:29:08.896 "num_blocks": 8192, 00:29:08.896 "uuid": "ee09df7f-c691-4887-9b93-1460b62f97be", 00:29:08.896 "assigned_rate_limits": { 00:29:08.896 "rw_ios_per_sec": 0, 00:29:08.896 "rw_mbytes_per_sec": 0, 00:29:08.896 "r_mbytes_per_sec": 0, 00:29:08.896 "w_mbytes_per_sec": 0 00:29:08.896 }, 00:29:08.896 "claimed": true, 00:29:08.896 "claim_type": "exclusive_write", 00:29:08.896 "zoned": false, 00:29:08.896 "supported_io_types": { 00:29:08.896 "read": true, 00:29:08.896 "write": true, 00:29:08.896 "unmap": true, 00:29:08.896 "flush": true, 00:29:08.896 "reset": true, 00:29:08.896 "nvme_admin": false, 00:29:08.896 "nvme_io": false, 00:29:08.896 "nvme_io_md": false, 00:29:08.896 "write_zeroes": true, 00:29:08.896 "zcopy": true, 00:29:08.896 
"get_zone_info": false, 00:29:08.896 "zone_management": false, 00:29:08.896 "zone_append": false, 00:29:08.896 "compare": false, 00:29:08.896 "compare_and_write": false, 00:29:08.896 "abort": true, 00:29:08.896 "seek_hole": false, 00:29:08.896 "seek_data": false, 00:29:08.896 "copy": true, 00:29:08.896 "nvme_iov_md": false 00:29:08.896 }, 00:29:08.896 "memory_domains": [ 00:29:08.896 { 00:29:08.896 "dma_device_id": "system", 00:29:08.896 "dma_device_type": 1 00:29:08.896 }, 00:29:08.896 { 00:29:08.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.896 "dma_device_type": 2 00:29:08.896 } 00:29:08.896 ], 00:29:08.896 "driver_specific": {} 00:29:08.896 } 00:29:08.896 ] 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # return 0 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:08.896 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:09.155 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:09.155 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:09.155 "name": "Existed_Raid", 00:29:09.155 "uuid": "20717fdd-d27f-453d-a638-63af4e6c8abb", 00:29:09.155 "strip_size_kb": 0, 00:29:09.155 "state": "configuring", 00:29:09.155 "raid_level": "raid1", 00:29:09.155 "superblock": true, 00:29:09.155 "num_base_bdevs": 2, 00:29:09.155 "num_base_bdevs_discovered": 1, 00:29:09.155 "num_base_bdevs_operational": 2, 00:29:09.155 "base_bdevs_list": [ 00:29:09.155 { 00:29:09.155 "name": "BaseBdev1", 00:29:09.155 "uuid": "ee09df7f-c691-4887-9b93-1460b62f97be", 00:29:09.155 "is_configured": true, 00:29:09.155 "data_offset": 256, 00:29:09.155 "data_size": 7936 00:29:09.155 }, 00:29:09.155 { 00:29:09.155 "name": "BaseBdev2", 00:29:09.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.155 "is_configured": false, 00:29:09.155 "data_offset": 0, 00:29:09.155 "data_size": 0 00:29:09.155 } 00:29:09.155 ] 00:29:09.155 }' 00:29:09.155 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:09.155 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:09.468 [2024-10-07 07:50:08.883180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:09.468 [2024-10-07 07:50:08.883237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:09.468 [2024-10-07 07:50:08.891217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:09.468 [2024-10-07 07:50:08.893644] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:09.468 [2024-10-07 07:50:08.894013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:09.468 07:50:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:09.468 "name": "Existed_Raid", 00:29:09.468 "uuid": "6e1c56db-1d3f-4ea1-8792-1b99f72b1460", 00:29:09.468 "strip_size_kb": 0, 00:29:09.468 "state": "configuring", 00:29:09.468 "raid_level": "raid1", 00:29:09.468 "superblock": true, 
00:29:09.468 "num_base_bdevs": 2, 00:29:09.468 "num_base_bdevs_discovered": 1, 00:29:09.468 "num_base_bdevs_operational": 2, 00:29:09.468 "base_bdevs_list": [ 00:29:09.468 { 00:29:09.468 "name": "BaseBdev1", 00:29:09.468 "uuid": "ee09df7f-c691-4887-9b93-1460b62f97be", 00:29:09.468 "is_configured": true, 00:29:09.468 "data_offset": 256, 00:29:09.468 "data_size": 7936 00:29:09.468 }, 00:29:09.468 { 00:29:09.468 "name": "BaseBdev2", 00:29:09.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.468 "is_configured": false, 00:29:09.468 "data_offset": 0, 00:29:09.468 "data_size": 0 00:29:09.468 } 00:29:09.468 ] 00:29:09.468 }' 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:09.468 07:50:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.036 [2024-10-07 07:50:09.366513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:10.036 [2024-10-07 07:50:09.366998] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:10.036 [2024-10-07 07:50:09.367126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:10.036 [2024-10-07 07:50:09.367449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:10.036 BaseBdev2 00:29:10.036 [2024-10-07 07:50:09.367721] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:10.036 [2024-10-07 07:50:09.367740] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:29:10.036 [2024-10-07 07:50:09.367885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local i 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.036 [ 00:29:10.036 { 00:29:10.036 "name": "BaseBdev2", 00:29:10.036 "aliases": [ 00:29:10.036 "3535d9b1-47f6-4879-ba9c-bb8fe0cb15f2" 00:29:10.036 ], 00:29:10.036 "product_name": "Malloc 
disk", 00:29:10.036 "block_size": 4096, 00:29:10.036 "num_blocks": 8192, 00:29:10.036 "uuid": "3535d9b1-47f6-4879-ba9c-bb8fe0cb15f2", 00:29:10.036 "assigned_rate_limits": { 00:29:10.036 "rw_ios_per_sec": 0, 00:29:10.036 "rw_mbytes_per_sec": 0, 00:29:10.036 "r_mbytes_per_sec": 0, 00:29:10.036 "w_mbytes_per_sec": 0 00:29:10.036 }, 00:29:10.036 "claimed": true, 00:29:10.036 "claim_type": "exclusive_write", 00:29:10.036 "zoned": false, 00:29:10.036 "supported_io_types": { 00:29:10.036 "read": true, 00:29:10.036 "write": true, 00:29:10.036 "unmap": true, 00:29:10.036 "flush": true, 00:29:10.036 "reset": true, 00:29:10.036 "nvme_admin": false, 00:29:10.036 "nvme_io": false, 00:29:10.036 "nvme_io_md": false, 00:29:10.036 "write_zeroes": true, 00:29:10.036 "zcopy": true, 00:29:10.036 "get_zone_info": false, 00:29:10.036 "zone_management": false, 00:29:10.036 "zone_append": false, 00:29:10.036 "compare": false, 00:29:10.036 "compare_and_write": false, 00:29:10.036 "abort": true, 00:29:10.036 "seek_hole": false, 00:29:10.036 "seek_data": false, 00:29:10.036 "copy": true, 00:29:10.036 "nvme_iov_md": false 00:29:10.036 }, 00:29:10.036 "memory_domains": [ 00:29:10.036 { 00:29:10.036 "dma_device_id": "system", 00:29:10.036 "dma_device_type": 1 00:29:10.036 }, 00:29:10.036 { 00:29:10.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.036 "dma_device_type": 2 00:29:10.036 } 00:29:10.036 ], 00:29:10.036 "driver_specific": {} 00:29:10.036 } 00:29:10.036 ] 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # return 0 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:10.036 "name": "Existed_Raid", 00:29:10.036 "uuid": "6e1c56db-1d3f-4ea1-8792-1b99f72b1460", 00:29:10.036 "strip_size_kb": 0, 00:29:10.036 "state": "online", 
00:29:10.036 "raid_level": "raid1", 00:29:10.036 "superblock": true, 00:29:10.036 "num_base_bdevs": 2, 00:29:10.036 "num_base_bdevs_discovered": 2, 00:29:10.036 "num_base_bdevs_operational": 2, 00:29:10.036 "base_bdevs_list": [ 00:29:10.036 { 00:29:10.036 "name": "BaseBdev1", 00:29:10.036 "uuid": "ee09df7f-c691-4887-9b93-1460b62f97be", 00:29:10.036 "is_configured": true, 00:29:10.036 "data_offset": 256, 00:29:10.036 "data_size": 7936 00:29:10.036 }, 00:29:10.036 { 00:29:10.036 "name": "BaseBdev2", 00:29:10.036 "uuid": "3535d9b1-47f6-4879-ba9c-bb8fe0cb15f2", 00:29:10.036 "is_configured": true, 00:29:10.036 "data_offset": 256, 00:29:10.036 "data_size": 7936 00:29:10.036 } 00:29:10.036 ] 00:29:10.036 }' 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:10.036 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:29:10.295 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:10.295 [2024-10-07 07:50:09.834984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:10.554 "name": "Existed_Raid", 00:29:10.554 "aliases": [ 00:29:10.554 "6e1c56db-1d3f-4ea1-8792-1b99f72b1460" 00:29:10.554 ], 00:29:10.554 "product_name": "Raid Volume", 00:29:10.554 "block_size": 4096, 00:29:10.554 "num_blocks": 7936, 00:29:10.554 "uuid": "6e1c56db-1d3f-4ea1-8792-1b99f72b1460", 00:29:10.554 "assigned_rate_limits": { 00:29:10.554 "rw_ios_per_sec": 0, 00:29:10.554 "rw_mbytes_per_sec": 0, 00:29:10.554 "r_mbytes_per_sec": 0, 00:29:10.554 "w_mbytes_per_sec": 0 00:29:10.554 }, 00:29:10.554 "claimed": false, 00:29:10.554 "zoned": false, 00:29:10.554 "supported_io_types": { 00:29:10.554 "read": true, 00:29:10.554 "write": true, 00:29:10.554 "unmap": false, 00:29:10.554 "flush": false, 00:29:10.554 "reset": true, 00:29:10.554 "nvme_admin": false, 00:29:10.554 "nvme_io": false, 00:29:10.554 "nvme_io_md": false, 00:29:10.554 "write_zeroes": true, 00:29:10.554 "zcopy": false, 00:29:10.554 "get_zone_info": false, 00:29:10.554 "zone_management": false, 00:29:10.554 "zone_append": false, 00:29:10.554 "compare": false, 00:29:10.554 "compare_and_write": false, 00:29:10.554 "abort": false, 00:29:10.554 "seek_hole": false, 00:29:10.554 "seek_data": false, 00:29:10.554 "copy": false, 00:29:10.554 "nvme_iov_md": false 00:29:10.554 }, 00:29:10.554 "memory_domains": [ 00:29:10.554 { 00:29:10.554 "dma_device_id": "system", 00:29:10.554 "dma_device_type": 1 00:29:10.554 }, 00:29:10.554 { 00:29:10.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.554 "dma_device_type": 2 00:29:10.554 }, 00:29:10.554 { 00:29:10.554 
"dma_device_id": "system", 00:29:10.554 "dma_device_type": 1 00:29:10.554 }, 00:29:10.554 { 00:29:10.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.554 "dma_device_type": 2 00:29:10.554 } 00:29:10.554 ], 00:29:10.554 "driver_specific": { 00:29:10.554 "raid": { 00:29:10.554 "uuid": "6e1c56db-1d3f-4ea1-8792-1b99f72b1460", 00:29:10.554 "strip_size_kb": 0, 00:29:10.554 "state": "online", 00:29:10.554 "raid_level": "raid1", 00:29:10.554 "superblock": true, 00:29:10.554 "num_base_bdevs": 2, 00:29:10.554 "num_base_bdevs_discovered": 2, 00:29:10.554 "num_base_bdevs_operational": 2, 00:29:10.554 "base_bdevs_list": [ 00:29:10.554 { 00:29:10.554 "name": "BaseBdev1", 00:29:10.554 "uuid": "ee09df7f-c691-4887-9b93-1460b62f97be", 00:29:10.554 "is_configured": true, 00:29:10.554 "data_offset": 256, 00:29:10.554 "data_size": 7936 00:29:10.554 }, 00:29:10.554 { 00:29:10.554 "name": "BaseBdev2", 00:29:10.554 "uuid": "3535d9b1-47f6-4879-ba9c-bb8fe0cb15f2", 00:29:10.554 "is_configured": true, 00:29:10.554 "data_offset": 256, 00:29:10.554 "data_size": 7936 00:29:10.554 } 00:29:10.554 ] 00:29:10.554 } 00:29:10.554 } 00:29:10.554 }' 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:10.554 BaseBdev2' 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.554 07:50:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:10.554 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.554 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:29:10.554 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:29:10.554 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:10.554 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.554 
07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.554 [2024-10-07 07:50:10.042805] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:10.813 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:10.814 07:50:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:10.814 "name": "Existed_Raid", 00:29:10.814 "uuid": "6e1c56db-1d3f-4ea1-8792-1b99f72b1460", 00:29:10.814 "strip_size_kb": 0, 00:29:10.814 "state": "online", 00:29:10.814 "raid_level": "raid1", 00:29:10.814 "superblock": true, 00:29:10.814 "num_base_bdevs": 2, 00:29:10.814 "num_base_bdevs_discovered": 1, 00:29:10.814 "num_base_bdevs_operational": 1, 00:29:10.814 "base_bdevs_list": [ 00:29:10.814 { 00:29:10.814 "name": null, 00:29:10.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.814 "is_configured": false, 00:29:10.814 "data_offset": 0, 00:29:10.814 "data_size": 7936 00:29:10.814 }, 00:29:10.814 { 00:29:10.814 "name": "BaseBdev2", 00:29:10.814 "uuid": "3535d9b1-47f6-4879-ba9c-bb8fe0cb15f2", 00:29:10.814 "is_configured": true, 00:29:10.814 "data_offset": 256, 00:29:10.814 "data_size": 7936 00:29:10.814 } 00:29:10.814 ] 00:29:10.814 }' 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:10.814 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:11.073 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:11.073 07:50:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:11.073 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:11.073 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.073 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:11.073 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:11.073 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:11.332 [2024-10-07 07:50:10.649429] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:11.332 [2024-10-07 07:50:10.649539] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:11.332 [2024-10-07 07:50:10.749760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:11.332 [2024-10-07 07:50:10.749817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:11.332 [2024-10-07 07:50:10.749832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:11.332 07:50:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86196 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' -z 86196 ']' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # kill -0 86196 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # uname 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 86196 00:29:11.332 killing process with pid 86196 00:29:11.332 07:50:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # echo 'killing process with pid 86196' 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # kill 86196 00:29:11.332 [2024-10-07 07:50:10.838866] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:11.332 07:50:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@977 -- # wait 86196 00:29:11.333 [2024-10-07 07:50:10.857547] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:12.712 07:50:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:29:12.712 ************************************ 00:29:12.712 END TEST raid_state_function_test_sb_4k 00:29:12.712 ************************************ 00:29:12.712 00:29:12.712 real 0m5.270s 00:29:12.712 user 0m7.483s 00:29:12.712 sys 0m0.879s 00:29:12.712 07:50:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # xtrace_disable 00:29:12.712 07:50:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:12.712 07:50:12 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:29:12.712 07:50:12 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:29:12.712 07:50:12 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:29:12.712 07:50:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:12.712 ************************************ 00:29:12.712 START TEST raid_superblock_test_4k 00:29:12.712 ************************************ 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # 
raid_superblock_test raid1 2 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86445 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86445 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # '[' -z 86445 ']' 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:12.712 07:50:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:12.971 [2024-10-07 07:50:12.383617] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:29:12.971 [2024-10-07 07:50:12.383817] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86445 ] 00:29:13.229 [2024-10-07 07:50:12.555818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.229 [2024-10-07 07:50:12.777964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.489 [2024-10-07 07:50:12.989826] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:13.489 [2024-10-07 07:50:12.989887] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@867 -- # return 0 00:29:14.060 07:50:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.060 malloc1 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.060 [2024-10-07 07:50:13.366567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:14.060 [2024-10-07 07:50:13.366809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.060 
[2024-10-07 07:50:13.366934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:14.060 [2024-10-07 07:50:13.367032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.060 [2024-10-07 07:50:13.369702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.060 [2024-10-07 07:50:13.369763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:14.060 pt1 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.060 malloc2 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 
00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.060 [2024-10-07 07:50:13.433510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:14.060 [2024-10-07 07:50:13.433792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.060 [2024-10-07 07:50:13.433844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:14.060 [2024-10-07 07:50:13.433865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.060 [2024-10-07 07:50:13.437450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.060 pt2 00:29:14.060 [2024-10-07 07:50:13.437639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.060 [2024-10-07 07:50:13.445961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:14.060 [2024-10-07 07:50:13.448230] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:14.060 [2024-10-07 07:50:13.448584] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:14.060 [2024-10-07 07:50:13.448606] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:14.060 [2024-10-07 07:50:13.448925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:14.060 [2024-10-07 07:50:13.449125] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:14.060 [2024-10-07 07:50:13.449143] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:14.060 [2024-10-07 07:50:13.449338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:14.060 "name": "raid_bdev1", 00:29:14.060 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:14.060 "strip_size_kb": 0, 00:29:14.060 "state": "online", 00:29:14.060 "raid_level": "raid1", 00:29:14.060 "superblock": true, 00:29:14.060 "num_base_bdevs": 2, 00:29:14.060 "num_base_bdevs_discovered": 2, 00:29:14.060 "num_base_bdevs_operational": 2, 00:29:14.060 "base_bdevs_list": [ 00:29:14.060 { 00:29:14.060 "name": "pt1", 00:29:14.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:14.060 "is_configured": true, 00:29:14.060 "data_offset": 256, 00:29:14.060 "data_size": 7936 00:29:14.060 }, 00:29:14.060 { 00:29:14.060 "name": "pt2", 00:29:14.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:14.060 "is_configured": true, 00:29:14.060 "data_offset": 256, 00:29:14.060 "data_size": 7936 00:29:14.060 } 00:29:14.060 ] 00:29:14.060 }' 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:14.060 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:14.320 07:50:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.320 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:14.320 [2024-10-07 07:50:13.878375] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:14.579 07:50:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.579 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:14.579 "name": "raid_bdev1", 00:29:14.579 "aliases": [ 00:29:14.579 "0b6fada7-1555-4e78-a456-760d24926fc1" 00:29:14.579 ], 00:29:14.579 "product_name": "Raid Volume", 00:29:14.579 "block_size": 4096, 00:29:14.579 "num_blocks": 7936, 00:29:14.579 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:14.579 "assigned_rate_limits": { 00:29:14.579 "rw_ios_per_sec": 0, 00:29:14.579 "rw_mbytes_per_sec": 0, 00:29:14.579 "r_mbytes_per_sec": 0, 00:29:14.579 "w_mbytes_per_sec": 0 00:29:14.579 }, 00:29:14.579 "claimed": false, 00:29:14.579 "zoned": false, 00:29:14.579 "supported_io_types": { 00:29:14.579 "read": true, 00:29:14.579 "write": true, 00:29:14.579 "unmap": false, 00:29:14.579 "flush": false, 
00:29:14.579 "reset": true, 00:29:14.579 "nvme_admin": false, 00:29:14.579 "nvme_io": false, 00:29:14.579 "nvme_io_md": false, 00:29:14.579 "write_zeroes": true, 00:29:14.579 "zcopy": false, 00:29:14.579 "get_zone_info": false, 00:29:14.579 "zone_management": false, 00:29:14.579 "zone_append": false, 00:29:14.579 "compare": false, 00:29:14.579 "compare_and_write": false, 00:29:14.579 "abort": false, 00:29:14.579 "seek_hole": false, 00:29:14.579 "seek_data": false, 00:29:14.579 "copy": false, 00:29:14.579 "nvme_iov_md": false 00:29:14.579 }, 00:29:14.579 "memory_domains": [ 00:29:14.579 { 00:29:14.579 "dma_device_id": "system", 00:29:14.579 "dma_device_type": 1 00:29:14.579 }, 00:29:14.579 { 00:29:14.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:14.579 "dma_device_type": 2 00:29:14.579 }, 00:29:14.579 { 00:29:14.579 "dma_device_id": "system", 00:29:14.579 "dma_device_type": 1 00:29:14.579 }, 00:29:14.579 { 00:29:14.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:14.579 "dma_device_type": 2 00:29:14.579 } 00:29:14.579 ], 00:29:14.579 "driver_specific": { 00:29:14.579 "raid": { 00:29:14.579 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:14.579 "strip_size_kb": 0, 00:29:14.579 "state": "online", 00:29:14.579 "raid_level": "raid1", 00:29:14.579 "superblock": true, 00:29:14.579 "num_base_bdevs": 2, 00:29:14.579 "num_base_bdevs_discovered": 2, 00:29:14.579 "num_base_bdevs_operational": 2, 00:29:14.579 "base_bdevs_list": [ 00:29:14.579 { 00:29:14.579 "name": "pt1", 00:29:14.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:14.579 "is_configured": true, 00:29:14.579 "data_offset": 256, 00:29:14.579 "data_size": 7936 00:29:14.579 }, 00:29:14.579 { 00:29:14.579 "name": "pt2", 00:29:14.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:14.579 "is_configured": true, 00:29:14.579 "data_offset": 256, 00:29:14.579 "data_size": 7936 00:29:14.579 } 00:29:14.579 ] 00:29:14.579 } 00:29:14.579 } 00:29:14.579 }' 00:29:14.579 07:50:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:14.580 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:14.580 pt2' 00:29:14.580 07:50:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.580 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.580 [2024-10-07 07:50:14.126323] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0b6fada7-1555-4e78-a456-760d24926fc1 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 0b6fada7-1555-4e78-a456-760d24926fc1 ']' 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 [2024-10-07 07:50:14.158062] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:14.839 [2024-10-07 07:50:14.158092] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:14.839 [2024-10-07 07:50:14.158186] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:14.839 [2024-10-07 07:50:14.158250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:14.839 [2024-10-07 07:50:14.158265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # local es=0 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.839 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 [2024-10-07 07:50:14.282105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:14.839 [2024-10-07 07:50:14.284462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:14.839 [2024-10-07 07:50:14.284698] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:14.839 [2024-10-07 07:50:14.284784] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:14.839 [2024-10-07 07:50:14.284805] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:14.839 [2024-10-07 07:50:14.284820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:14.839 request: 00:29:14.839 { 00:29:14.839 "name": "raid_bdev1", 00:29:14.839 "raid_level": "raid1", 00:29:14.839 "base_bdevs": [ 00:29:14.839 "malloc1", 00:29:14.839 "malloc2" 00:29:14.839 ], 00:29:14.839 "superblock": false, 00:29:14.839 "method": "bdev_raid_create", 00:29:14.840 "req_id": 1 00:29:14.840 } 00:29:14.840 Got JSON-RPC error response 00:29:14.840 response: 00:29:14.840 { 00:29:14.840 "code": -17, 00:29:14.840 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:14.840 } 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@656 -- # es=1 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@664 -- # (( es > 
128 )) 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:14.840 [2024-10-07 07:50:14.338094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:14.840 [2024-10-07 07:50:14.338315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.840 [2024-10-07 07:50:14.338349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:14.840 [2024-10-07 07:50:14.338365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.840 [2024-10-07 07:50:14.341037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.840 [2024-10-07 07:50:14.341085] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:14.840 [2024-10-07 07:50:14.341181] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:14.840 [2024-10-07 07:50:14.341250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:14.840 pt1 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:14.840 "name": "raid_bdev1", 00:29:14.840 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:14.840 "strip_size_kb": 0, 00:29:14.840 "state": "configuring", 00:29:14.840 "raid_level": "raid1", 00:29:14.840 "superblock": true, 00:29:14.840 "num_base_bdevs": 2, 00:29:14.840 "num_base_bdevs_discovered": 1, 00:29:14.840 "num_base_bdevs_operational": 2, 00:29:14.840 "base_bdevs_list": [ 00:29:14.840 { 00:29:14.840 "name": "pt1", 00:29:14.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:14.840 "is_configured": true, 00:29:14.840 "data_offset": 256, 00:29:14.840 "data_size": 7936 00:29:14.840 }, 00:29:14.840 { 00:29:14.840 "name": null, 00:29:14.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:14.840 "is_configured": false, 00:29:14.840 "data_offset": 256, 00:29:14.840 "data_size": 7936 00:29:14.840 } 00:29:14.840 ] 00:29:14.840 }' 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:14.840 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.407 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:29:15.407 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:15.407 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:15.407 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:15.407 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.407 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:29:15.407 [2024-10-07 07:50:14.782193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:15.407 [2024-10-07 07:50:14.782415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.407 [2024-10-07 07:50:14.782448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:15.407 [2024-10-07 07:50:14.782463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.407 [2024-10-07 07:50:14.783023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.407 [2024-10-07 07:50:14.783050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:15.407 [2024-10-07 07:50:14.783135] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:15.407 [2024-10-07 07:50:14.783161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:15.407 [2024-10-07 07:50:14.783284] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:15.407 [2024-10-07 07:50:14.783298] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:15.407 [2024-10-07 07:50:14.783590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:15.407 [2024-10-07 07:50:14.783787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:15.407 [2024-10-07 07:50:14.783807] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:15.407 [2024-10-07 07:50:14.783976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.407 pt2 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:15.408 07:50:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:15.408 "name": "raid_bdev1", 00:29:15.408 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:15.408 
"strip_size_kb": 0, 00:29:15.408 "state": "online", 00:29:15.408 "raid_level": "raid1", 00:29:15.408 "superblock": true, 00:29:15.408 "num_base_bdevs": 2, 00:29:15.408 "num_base_bdevs_discovered": 2, 00:29:15.408 "num_base_bdevs_operational": 2, 00:29:15.408 "base_bdevs_list": [ 00:29:15.408 { 00:29:15.408 "name": "pt1", 00:29:15.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:15.408 "is_configured": true, 00:29:15.408 "data_offset": 256, 00:29:15.408 "data_size": 7936 00:29:15.408 }, 00:29:15.408 { 00:29:15.408 "name": "pt2", 00:29:15.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:15.408 "is_configured": true, 00:29:15.408 "data_offset": 256, 00:29:15.408 "data_size": 7936 00:29:15.408 } 00:29:15.408 ] 00:29:15.408 }' 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:15.408 07:50:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:15.667 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.667 07:50:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.667 [2024-10-07 07:50:15.206518] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:15.926 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:15.926 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:15.926 "name": "raid_bdev1", 00:29:15.926 "aliases": [ 00:29:15.926 "0b6fada7-1555-4e78-a456-760d24926fc1" 00:29:15.926 ], 00:29:15.926 "product_name": "Raid Volume", 00:29:15.926 "block_size": 4096, 00:29:15.926 "num_blocks": 7936, 00:29:15.926 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:15.926 "assigned_rate_limits": { 00:29:15.926 "rw_ios_per_sec": 0, 00:29:15.926 "rw_mbytes_per_sec": 0, 00:29:15.926 "r_mbytes_per_sec": 0, 00:29:15.926 "w_mbytes_per_sec": 0 00:29:15.926 }, 00:29:15.926 "claimed": false, 00:29:15.926 "zoned": false, 00:29:15.926 "supported_io_types": { 00:29:15.926 "read": true, 00:29:15.926 "write": true, 00:29:15.926 "unmap": false, 00:29:15.926 "flush": false, 00:29:15.927 "reset": true, 00:29:15.927 "nvme_admin": false, 00:29:15.927 "nvme_io": false, 00:29:15.927 "nvme_io_md": false, 00:29:15.927 "write_zeroes": true, 00:29:15.927 "zcopy": false, 00:29:15.927 "get_zone_info": false, 00:29:15.927 "zone_management": false, 00:29:15.927 "zone_append": false, 00:29:15.927 "compare": false, 00:29:15.927 "compare_and_write": false, 00:29:15.927 "abort": false, 00:29:15.927 "seek_hole": false, 00:29:15.927 "seek_data": false, 00:29:15.927 "copy": false, 00:29:15.927 "nvme_iov_md": false 00:29:15.927 }, 00:29:15.927 "memory_domains": [ 00:29:15.927 { 00:29:15.927 "dma_device_id": "system", 00:29:15.927 "dma_device_type": 1 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:15.927 "dma_device_type": 2 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "dma_device_id": "system", 00:29:15.927 
"dma_device_type": 1 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:15.927 "dma_device_type": 2 00:29:15.927 } 00:29:15.927 ], 00:29:15.927 "driver_specific": { 00:29:15.927 "raid": { 00:29:15.927 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:15.927 "strip_size_kb": 0, 00:29:15.927 "state": "online", 00:29:15.927 "raid_level": "raid1", 00:29:15.927 "superblock": true, 00:29:15.927 "num_base_bdevs": 2, 00:29:15.927 "num_base_bdevs_discovered": 2, 00:29:15.927 "num_base_bdevs_operational": 2, 00:29:15.927 "base_bdevs_list": [ 00:29:15.927 { 00:29:15.927 "name": "pt1", 00:29:15.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:15.927 "is_configured": true, 00:29:15.927 "data_offset": 256, 00:29:15.927 "data_size": 7936 00:29:15.927 }, 00:29:15.927 { 00:29:15.927 "name": "pt2", 00:29:15.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:15.927 "is_configured": true, 00:29:15.927 "data_offset": 256, 00:29:15.927 "data_size": 7936 00:29:15.927 } 00:29:15.927 ] 00:29:15.927 } 00:29:15.927 } 00:29:15.927 }' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:15.927 pt2' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 [2024-10-07 
07:50:15.414563] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 0b6fada7-1555-4e78-a456-760d24926fc1 '!=' 0b6fada7-1555-4e78-a456-760d24926fc1 ']' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 [2024-10-07 07:50:15.446357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:16.186 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:16.186 "name": "raid_bdev1", 00:29:16.186 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:16.186 "strip_size_kb": 0, 00:29:16.186 "state": "online", 00:29:16.186 "raid_level": "raid1", 00:29:16.186 "superblock": true, 00:29:16.186 "num_base_bdevs": 2, 00:29:16.186 "num_base_bdevs_discovered": 1, 00:29:16.186 "num_base_bdevs_operational": 1, 00:29:16.186 "base_bdevs_list": [ 00:29:16.186 { 00:29:16.186 "name": null, 00:29:16.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.186 "is_configured": false, 00:29:16.186 "data_offset": 0, 00:29:16.186 "data_size": 7936 00:29:16.186 }, 00:29:16.186 { 00:29:16.186 "name": "pt2", 00:29:16.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:16.187 "is_configured": true, 00:29:16.187 "data_offset": 256, 00:29:16.187 "data_size": 7936 00:29:16.187 } 00:29:16.187 ] 00:29:16.187 }' 00:29:16.187 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:16.187 07:50:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:16.446 [2024-10-07 07:50:15.858421] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:16.446 [2024-10-07 07:50:15.858572] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:16.446 [2024-10-07 07:50:15.858694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:16.446 [2024-10-07 07:50:15.858761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:16.446 [2024-10-07 07:50:15.858778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:16.446 [2024-10-07 07:50:15.934424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:16.446 [2024-10-07 07:50:15.934486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.446 [2024-10-07 07:50:15.934506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:16.446 [2024-10-07 07:50:15.934521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.446 [2024-10-07 07:50:15.937077] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.446 [2024-10-07 07:50:15.937231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:16.446 [2024-10-07 07:50:15.937333] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:16.446 [2024-10-07 07:50:15.937387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:16.446 [2024-10-07 07:50:15.937503] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:16.446 [2024-10-07 07:50:15.937520] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:16.446 [2024-10-07 07:50:15.937785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:16.446 [2024-10-07 07:50:15.937938] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:16.446 [2024-10-07 07:50:15.937949] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:16.446 [2024-10-07 07:50:15.938098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.446 pt2 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:16.446 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:16.447 "name": "raid_bdev1", 00:29:16.447 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:16.447 "strip_size_kb": 0, 00:29:16.447 "state": "online", 00:29:16.447 "raid_level": "raid1", 00:29:16.447 "superblock": true, 00:29:16.447 "num_base_bdevs": 2, 00:29:16.447 "num_base_bdevs_discovered": 1, 00:29:16.447 "num_base_bdevs_operational": 1, 00:29:16.447 "base_bdevs_list": [ 00:29:16.447 { 00:29:16.447 "name": null, 00:29:16.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.447 "is_configured": false, 00:29:16.447 "data_offset": 256, 00:29:16.447 "data_size": 7936 00:29:16.447 }, 00:29:16.447 { 00:29:16.447 "name": "pt2", 00:29:16.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:16.447 "is_configured": true, 00:29:16.447 "data_offset": 256, 00:29:16.447 "data_size": 7936 00:29:16.447 } 00:29:16.447 ] 00:29:16.447 }' 
00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:16.447 07:50:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.016 [2024-10-07 07:50:16.322514] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:17.016 [2024-10-07 07:50:16.322556] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:17.016 [2024-10-07 07:50:16.322655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:17.016 [2024-10-07 07:50:16.322759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:17.016 [2024-10-07 07:50:16.322780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.016 [2024-10-07 07:50:16.374532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:17.016 [2024-10-07 07:50:16.374601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.016 [2024-10-07 07:50:16.374625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:29:17.016 [2024-10-07 07:50:16.374637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.016 [2024-10-07 07:50:16.377269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.016 [2024-10-07 07:50:16.377313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:17.016 [2024-10-07 07:50:16.377412] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:17.016 [2024-10-07 07:50:16.377463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:17.016 [2024-10-07 07:50:16.377601] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:17.016 [2024-10-07 07:50:16.377613] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:17.016 [2024-10-07 07:50:16.377635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:29:17.016 [2024-10-07 07:50:16.377732] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:17.016 [2024-10-07 07:50:16.377814] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:29:17.016 [2024-10-07 07:50:16.377824] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:17.016 [2024-10-07 07:50:16.378081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:17.016 [2024-10-07 07:50:16.378218] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:29:17.016 [2024-10-07 07:50:16.378232] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:29:17.016 [2024-10-07 07:50:16.378383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.016 pt1 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:17.016 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.017 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.017 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:17.017 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.017 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:17.017 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:17.017 "name": "raid_bdev1", 00:29:17.017 "uuid": "0b6fada7-1555-4e78-a456-760d24926fc1", 00:29:17.017 "strip_size_kb": 0, 00:29:17.017 "state": "online", 00:29:17.017 "raid_level": "raid1", 00:29:17.017 "superblock": true, 00:29:17.017 "num_base_bdevs": 2, 00:29:17.017 "num_base_bdevs_discovered": 1, 00:29:17.017 "num_base_bdevs_operational": 1, 00:29:17.017 "base_bdevs_list": [ 00:29:17.017 { 00:29:17.017 "name": null, 00:29:17.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.017 "is_configured": false, 00:29:17.017 "data_offset": 256, 00:29:17.017 "data_size": 7936 00:29:17.017 }, 00:29:17.017 { 00:29:17.017 "name": "pt2", 00:29:17.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:17.017 "is_configured": true, 00:29:17.017 "data_offset": 256, 00:29:17.017 "data_size": 7936 00:29:17.017 } 00:29:17.017 ] 00:29:17.017 }' 00:29:17.017 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:17.017 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.276 07:50:16 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:17.276 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:17.276 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:17.276 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.276 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:17.535 [2024-10-07 07:50:16.862896] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 0b6fada7-1555-4e78-a456-760d24926fc1 '!=' 0b6fada7-1555-4e78-a456-760d24926fc1 ']' 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86445 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' -z 86445 ']' 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # kill -0 86445 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # uname 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' Linux = Linux 
']' 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 86445 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:29:17.535 killing process with pid 86445 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # echo 'killing process with pid 86445' 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # kill 86445 00:29:17.535 [2024-10-07 07:50:16.940703] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:17.535 07:50:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@977 -- # wait 86445 00:29:17.535 [2024-10-07 07:50:16.940829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:17.535 [2024-10-07 07:50:16.940899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:17.535 [2024-10-07 07:50:16.940920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:29:17.793 [2024-10-07 07:50:17.161101] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:19.173 ************************************ 00:29:19.173 END TEST raid_superblock_test_4k 00:29:19.173 ************************************ 00:29:19.173 07:50:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:29:19.173 00:29:19.173 real 0m6.230s 00:29:19.173 user 0m9.238s 00:29:19.173 sys 0m1.157s 00:29:19.173 07:50:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # xtrace_disable 00:29:19.173 07:50:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:29:19.173 07:50:18 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:29:19.173 07:50:18 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:29:19.173 07:50:18 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:29:19.173 07:50:18 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:29:19.173 07:50:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:19.173 ************************************ 00:29:19.173 START TEST raid_rebuild_test_sb_4k 00:29:19.173 ************************************ 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 2 true false true 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86772 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86772 00:29:19.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # '[' -z 86772 ']' 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:19.173 07:50:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:19.173 [2024-10-07 07:50:18.665109] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:29:19.173 [2024-10-07 07:50:18.665455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86772 ] 00:29:19.173 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:19.173 Zero copy mechanism will not be used. 
00:29:19.432 [2024-10-07 07:50:18.845734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.691 [2024-10-07 07:50:19.064282] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.949 [2024-10-07 07:50:19.281338] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:19.949 [2024-10-07 07:50:19.281385] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # return 0 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:20.208 BaseBdev1_malloc 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:20.208 [2024-10-07 07:50:19.571809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:20.208 [2024-10-07 07:50:19.571880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.208 [2024-10-07 07:50:19.571907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:29:20.208 [2024-10-07 07:50:19.571926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:20.208 [2024-10-07 07:50:19.574323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:20.208 [2024-10-07 07:50:19.574496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:29:20.208 BaseBdev1
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.208 BaseBdev2_malloc
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.208 [2024-10-07 07:50:19.644521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:29:20.208 [2024-10-07 07:50:19.644746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:20.208 [2024-10-07 07:50:19.644778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:29:20.208 [2024-10-07 07:50:19.644795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:20.208 [2024-10-07 07:50:19.647175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:20.208 [2024-10-07 07:50:19.647221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:29:20.208 BaseBdev2
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.208 spare_malloc
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.208 spare_delay
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.208 [2024-10-07 07:50:19.703594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:29:20.208 [2024-10-07 07:50:19.703660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:20.208 [2024-10-07 07:50:19.703703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:29:20.208 [2024-10-07 07:50:19.703718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:20.208 [2024-10-07 07:50:19.706382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:20.208 [2024-10-07 07:50:19.706434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:29:20.208 spare
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.208 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.208 [2024-10-07 07:50:19.711666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:29:20.208 [2024-10-07 07:50:19.713981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:29:20.208 [2024-10-07 07:50:19.714172] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:29:20.209 [2024-10-07 07:50:19.714189] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:29:20.209 [2024-10-07 07:50:19.714490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:29:20.209 [2024-10-07 07:50:19.714656] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:29:20.209 [2024-10-07 07:50:19.714666] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:29:20.209 [2024-10-07 07:50:19.714855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:20.209 "name": "raid_bdev1",
00:29:20.209 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:20.209 "strip_size_kb": 0,
00:29:20.209 "state": "online",
00:29:20.209 "raid_level": "raid1",
00:29:20.209 "superblock": true,
00:29:20.209 "num_base_bdevs": 2,
00:29:20.209 "num_base_bdevs_discovered": 2,
00:29:20.209 "num_base_bdevs_operational": 2,
00:29:20.209 "base_bdevs_list": [
00:29:20.209 {
00:29:20.209 "name": "BaseBdev1",
00:29:20.209 "uuid": "ec593cd4-1f74-5822-be04-265f217cfb8c",
00:29:20.209 "is_configured": true,
00:29:20.209 "data_offset": 256,
00:29:20.209 "data_size": 7936
00:29:20.209 },
00:29:20.209 {
00:29:20.209 "name": "BaseBdev2",
00:29:20.209 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699",
00:29:20.209 "is_configured": true,
00:29:20.209 "data_offset": 256,
00:29:20.209 "data_size": 7936
00:29:20.209 }
00:29:20.209 ]
00:29:20.209 }'
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:20.209 07:50:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:29:20.777 [2024-10-07 07:50:20.164042] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:29:20.777 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:29:21.035 [2024-10-07 07:50:20.527894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:29:21.035 /dev/nbd0
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local nbd_name=nbd0
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local i
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # (( i = 1 ))
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # (( i <= 20 ))
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # break
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # (( i = 1 ))
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # (( i <= 20 ))
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:21.035 1+0 records in
00:29:21.035 1+0 records out
00:29:21.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296375 s, 13.8 MB/s
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # size=4096
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']'
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # return 0
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:29:21.035 07:50:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:29:21.968 7936+0 records in
00:29:21.968 7936+0 records out
00:29:21.968 32505856 bytes (33 MB, 31 MiB) copied, 0.754181 s, 43.1 MB/s
00:29:21.968 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:29:21.968 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:29:21.968 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:29:21.968 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:21.968 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:29:21.968 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:21.968 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:22.227 [2024-10-07 07:50:21.634116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:22.227 [2024-10-07 07:50:21.646249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:22.227 "name": "raid_bdev1",
00:29:22.227 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:22.227 "strip_size_kb": 0,
00:29:22.227 "state": "online",
00:29:22.227 "raid_level": "raid1",
00:29:22.227 "superblock": true,
00:29:22.227 "num_base_bdevs": 2,
00:29:22.227 "num_base_bdevs_discovered": 1,
00:29:22.227 "num_base_bdevs_operational": 1,
00:29:22.227 "base_bdevs_list": [
00:29:22.227 {
00:29:22.227 "name": null,
00:29:22.227 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:22.227 "is_configured": false,
00:29:22.227 "data_offset": 0,
00:29:22.227 "data_size": 7936
00:29:22.227 },
00:29:22.227 {
00:29:22.227 "name": "BaseBdev2",
00:29:22.227 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699",
00:29:22.227 "is_configured": true,
00:29:22.227 "data_offset": 256,
00:29:22.227 "data_size": 7936
00:29:22.227 }
00:29:22.227 ]
00:29:22.227 }'
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:22.227 07:50:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:22.794 07:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:29:22.794 07:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:22.794 07:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:22.794 [2024-10-07 07:50:22.078350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:29:22.794 [2024-10-07 07:50:22.093976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:29:22.794 07:50:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:22.794 07:50:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1
00:29:22.794 [2024-10-07 07:50:22.096158] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:29:23.749 "name": "raid_bdev1",
00:29:23.749 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:23.749 "strip_size_kb": 0,
00:29:23.749 "state": "online",
00:29:23.749 "raid_level": "raid1",
00:29:23.749 "superblock": true,
00:29:23.749 "num_base_bdevs": 2,
00:29:23.749 "num_base_bdevs_discovered": 2,
00:29:23.749 "num_base_bdevs_operational": 2,
00:29:23.749 "process": {
00:29:23.749 "type": "rebuild",
00:29:23.749 "target": "spare",
00:29:23.749 "progress": {
00:29:23.749 "blocks": 2560,
00:29:23.749 "percent": 32
00:29:23.749 }
00:29:23.749 },
00:29:23.749 "base_bdevs_list": [
00:29:23.749 {
00:29:23.749 "name": "spare",
00:29:23.749 "uuid": "d699d71a-322b-535a-b236-d6690024536c",
00:29:23.749 "is_configured": true,
00:29:23.749 "data_offset": 256,
00:29:23.749 "data_size": 7936
00:29:23.749 },
00:29:23.749 {
00:29:23.749 "name": "BaseBdev2",
00:29:23.749 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699",
00:29:23.749 "is_configured": true,
00:29:23.749 "data_offset": 256,
00:29:23.749 "data_size": 7936
00:29:23.749 }
00:29:23.749 ]
00:29:23.749 }'
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:23.749 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:23.749 [2024-10-07 07:50:23.245769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:29:23.749 [2024-10-07 07:50:23.304118] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:29:23.749 [2024-10-07 07:50:23.304201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:23.749 [2024-10-07 07:50:23.304219] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:29:23.750 [2024-10-07 07:50:23.304231] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:24.009 "name": "raid_bdev1",
00:29:24.009 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:24.009 "strip_size_kb": 0,
00:29:24.009 "state": "online",
00:29:24.009 "raid_level": "raid1",
00:29:24.009 "superblock": true,
00:29:24.009 "num_base_bdevs": 2,
00:29:24.009 "num_base_bdevs_discovered": 1,
00:29:24.009 "num_base_bdevs_operational": 1,
00:29:24.009 "base_bdevs_list": [
00:29:24.009 {
00:29:24.009 "name": null,
00:29:24.009 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:24.009 "is_configured": false,
00:29:24.009 "data_offset": 0,
00:29:24.009 "data_size": 7936
00:29:24.009 },
00:29:24.009 {
00:29:24.009 "name": "BaseBdev2",
00:29:24.009 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699",
00:29:24.009 "is_configured": true,
00:29:24.009 "data_offset": 256,
00:29:24.009 "data_size": 7936
00:29:24.009 }
00:29:24.009 ]
00:29:24.009 }'
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:24.009 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:29:24.268 "name": "raid_bdev1",
00:29:24.268 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:24.268 "strip_size_kb": 0,
00:29:24.268 "state": "online",
00:29:24.268 "raid_level": "raid1",
00:29:24.268 "superblock": true,
00:29:24.268 "num_base_bdevs": 2,
00:29:24.268 "num_base_bdevs_discovered": 1,
00:29:24.268 "num_base_bdevs_operational": 1,
00:29:24.268 "base_bdevs_list": [
00:29:24.268 {
00:29:24.268 "name": null,
00:29:24.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:24.268 "is_configured": false,
00:29:24.268 "data_offset": 0,
00:29:24.268 "data_size": 7936
00:29:24.268 },
00:29:24.268 {
00:29:24.268 "name": "BaseBdev2",
00:29:24.268 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699",
00:29:24.268 "is_configured": true,
00:29:24.268 "data_offset": 256,
00:29:24.268 "data_size": 7936
00:29:24.268 }
00:29:24.268 ]
00:29:24.268 }'
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:29:24.268 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:29:24.526 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:29:24.526 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:29:24.526 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:24.526 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:24.526 [2024-10-07 07:50:23.863661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:29:24.526 [2024-10-07 07:50:23.879531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:29:24.526 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:24.526 07:50:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1
00:29:24.526 [2024-10-07 07:50:23.881775] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:29:25.460 "name": "raid_bdev1",
00:29:25.460 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:25.460 "strip_size_kb": 0,
00:29:25.460 "state": "online",
00:29:25.460 "raid_level": "raid1",
00:29:25.460 "superblock": true,
00:29:25.460 "num_base_bdevs": 2,
00:29:25.460 "num_base_bdevs_discovered": 2,
00:29:25.460 "num_base_bdevs_operational": 2,
00:29:25.460 "process": {
00:29:25.460 "type": "rebuild",
00:29:25.460 "target": "spare",
00:29:25.460 "progress": {
00:29:25.460 "blocks": 2560,
00:29:25.460 "percent": 32
00:29:25.460 }
00:29:25.460 },
00:29:25.460 "base_bdevs_list": [
00:29:25.460 {
00:29:25.460 "name": "spare",
00:29:25.460 "uuid": "d699d71a-322b-535a-b236-d6690024536c",
00:29:25.460 "is_configured": true,
00:29:25.460 "data_offset": 256,
00:29:25.460 "data_size": 7936
00:29:25.460 },
00:29:25.460 {
00:29:25.460 "name": "BaseBdev2",
00:29:25.460 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699",
00:29:25.460 "is_configured": true,
00:29:25.460 "data_offset": 256,
00:29:25.460 "data_size": 7936
00:29:25.460 }
00:29:25.460 ]
00:29:25.460 }'
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:25.460 07:50:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:29:25.719 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=715
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:29:25.719 "name": "raid_bdev1",
00:29:25.719 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:25.719 "strip_size_kb": 0,
00:29:25.719 "state": "online",
00:29:25.719 "raid_level": "raid1",
00:29:25.719 "superblock": true,
00:29:25.719 "num_base_bdevs": 2,
00:29:25.719 "num_base_bdevs_discovered": 2,
00:29:25.719 "num_base_bdevs_operational": 2,
00:29:25.719 "process": {
00:29:25.719 "type": "rebuild",
00:29:25.719 "target": "spare",
00:29:25.719 "progress": {
00:29:25.719 "blocks": 2816,
00:29:25.719 "percent": 35
00:29:25.719 }
00:29:25.719 },
00:29:25.719 "base_bdevs_list": [
00:29:25.719 {
00:29:25.719 "name": "spare",
00:29:25.719 "uuid": "d699d71a-322b-535a-b236-d6690024536c",
00:29:25.719 "is_configured": true,
00:29:25.719 "data_offset": 256,
00:29:25.719 "data_size": 7936
00:29:25.719 },
00:29:25.719 {
00:29:25.719 "name": "BaseBdev2",
00:29:25.719 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699",
00:29:25.719 "is_configured": true,
00:29:25.719 "data_offset": 256,
00:29:25.719 "data_size": 7936
00:29:25.719 }
00:29:25.719 ]
00:29:25.719 }'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:29:25.719 07:50:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:29:26.653 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]]
00:29:26.911 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:29:26.911 "name": "raid_bdev1",
00:29:26.911 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063",
00:29:26.911 "strip_size_kb": 0,
00:29:26.911 "state": "online",
00:29:26.911 "raid_level": "raid1",
00:29:26.911 "superblock": true,
00:29:26.911 "num_base_bdevs": 2,
00:29:26.911 "num_base_bdevs_discovered": 2,
00:29:26.911 "num_base_bdevs_operational": 2,
00:29:26.911 "process": {
00:29:26.911 "type": "rebuild",
00:29:26.911 "target": "spare",
00:29:26.911 "progress": {
00:29:26.911 "blocks": 5632,
00:29:26.911 "percent": 70
00:29:26.911 }
00:29:26.911 },
00:29:26.911 "base_bdevs_list": [
00:29:26.911 {
00:29:26.911 "name": "spare",
00:29:26.911 "uuid": "d699d71a-322b-535a-b236-d6690024536c",
00:29:26.911 "is_configured": true,
00:29:26.911 "data_offset": 256,
00:29:26.911 "data_size": 7936
00:29:26.911 
}, 00:29:26.911 { 00:29:26.911 "name": "BaseBdev2", 00:29:26.911 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:26.911 "is_configured": true, 00:29:26.911 "data_offset": 256, 00:29:26.911 "data_size": 7936 00:29:26.911 } 00:29:26.911 ] 00:29:26.911 }' 00:29:26.911 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:26.911 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.912 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:26.912 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.912 07:50:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:27.479 [2024-10-07 07:50:27.001840] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:27.479 [2024-10-07 07:50:27.001936] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:27.479 [2024-10-07 07:50:27.002057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:28.047 "name": "raid_bdev1", 00:29:28.047 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:28.047 "strip_size_kb": 0, 00:29:28.047 "state": "online", 00:29:28.047 "raid_level": "raid1", 00:29:28.047 "superblock": true, 00:29:28.047 "num_base_bdevs": 2, 00:29:28.047 "num_base_bdevs_discovered": 2, 00:29:28.047 "num_base_bdevs_operational": 2, 00:29:28.047 "base_bdevs_list": [ 00:29:28.047 { 00:29:28.047 "name": "spare", 00:29:28.047 "uuid": "d699d71a-322b-535a-b236-d6690024536c", 00:29:28.047 "is_configured": true, 00:29:28.047 "data_offset": 256, 00:29:28.047 "data_size": 7936 00:29:28.047 }, 00:29:28.047 { 00:29:28.047 "name": "BaseBdev2", 00:29:28.047 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:28.047 "is_configured": true, 00:29:28.047 "data_offset": 256, 00:29:28.047 "data_size": 7936 00:29:28.047 } 00:29:28.047 ] 00:29:28.047 }' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:28.047 "name": "raid_bdev1", 00:29:28.047 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:28.047 "strip_size_kb": 0, 00:29:28.047 "state": "online", 00:29:28.047 "raid_level": "raid1", 00:29:28.047 "superblock": true, 00:29:28.047 "num_base_bdevs": 2, 00:29:28.047 "num_base_bdevs_discovered": 2, 00:29:28.047 "num_base_bdevs_operational": 2, 00:29:28.047 "base_bdevs_list": [ 00:29:28.047 { 00:29:28.047 "name": "spare", 00:29:28.047 "uuid": "d699d71a-322b-535a-b236-d6690024536c", 00:29:28.047 "is_configured": true, 00:29:28.047 "data_offset": 256, 00:29:28.047 "data_size": 7936 00:29:28.047 }, 00:29:28.047 { 00:29:28.047 "name": "BaseBdev2", 00:29:28.047 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:28.047 "is_configured": true, 
00:29:28.047 "data_offset": 256, 00:29:28.047 "data_size": 7936 00:29:28.047 } 00:29:28.047 ] 00:29:28.047 }' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.047 07:50:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:28.047 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.307 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.307 "name": "raid_bdev1", 00:29:28.307 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:28.307 "strip_size_kb": 0, 00:29:28.307 "state": "online", 00:29:28.307 "raid_level": "raid1", 00:29:28.307 "superblock": true, 00:29:28.307 "num_base_bdevs": 2, 00:29:28.307 "num_base_bdevs_discovered": 2, 00:29:28.307 "num_base_bdevs_operational": 2, 00:29:28.307 "base_bdevs_list": [ 00:29:28.307 { 00:29:28.307 "name": "spare", 00:29:28.307 "uuid": "d699d71a-322b-535a-b236-d6690024536c", 00:29:28.307 "is_configured": true, 00:29:28.307 "data_offset": 256, 00:29:28.307 "data_size": 7936 00:29:28.307 }, 00:29:28.307 { 00:29:28.307 "name": "BaseBdev2", 00:29:28.307 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:28.307 "is_configured": true, 00:29:28.307 "data_offset": 256, 00:29:28.307 "data_size": 7936 00:29:28.307 } 00:29:28.307 ] 00:29:28.307 }' 00:29:28.307 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.307 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:28.567 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:28.567 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.567 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:28.567 [2024-10-07 07:50:27.996741] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:28.567 [2024-10-07 07:50:27.996924] bdev_raid.c:1895:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:29:28.567 [2024-10-07 07:50:27.997120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:28.567 [2024-10-07 07:50:27.997203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:28.567 [2024-10-07 07:50:27.997217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:28.567 07:50:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:28.567 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:28.826 /dev/nbd0 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local i 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # break 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:28.826 1+0 records in 00:29:28.826 1+0 records out 00:29:28.826 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545932 s, 7.5 MB/s 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # size=4096 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # return 0 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:28.826 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:29:29.084 /dev/nbd1 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local i 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # break 00:29:29.084 07:50:28 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.084 1+0 records in 00:29:29.084 1+0 records out 00:29:29.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044104 s, 9.3 MB/s 00:29:29.084 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.085 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # size=4096 00:29:29.085 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.085 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:29:29.085 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # return 0 00:29:29.085 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.085 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:29.085 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:29.343 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:29:29.343 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:29.343 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:29.343 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:29.343 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:29:29.343 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:29.343 07:50:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:29.603 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:29.880 07:50:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.880 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:29.880 [2024-10-07 07:50:29.257361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:29.880 [2024-10-07 07:50:29.257542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:29.880 [2024-10-07 07:50:29.257659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:29.880 [2024-10-07 07:50:29.257677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:29.880 [2024-10-07 07:50:29.260217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:29.880 [2024-10-07 07:50:29.260259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:29.881 [2024-10-07 07:50:29.260357] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:29:29.881 [2024-10-07 07:50:29.260408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:29.881 [2024-10-07 07:50:29.260614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:29.881 spare 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:29.881 [2024-10-07 07:50:29.360719] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:29:29.881 [2024-10-07 07:50:29.360782] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:29.881 [2024-10-07 07:50:29.361150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:29:29.881 [2024-10-07 07:50:29.361369] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:29:29.881 [2024-10-07 07:50:29.361381] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:29:29.881 [2024-10-07 07:50:29.361605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:29.881 
07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.881 "name": "raid_bdev1", 00:29:29.881 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:29.881 "strip_size_kb": 0, 00:29:29.881 "state": "online", 00:29:29.881 "raid_level": "raid1", 00:29:29.881 "superblock": true, 00:29:29.881 "num_base_bdevs": 2, 00:29:29.881 "num_base_bdevs_discovered": 2, 00:29:29.881 "num_base_bdevs_operational": 2, 00:29:29.881 "base_bdevs_list": [ 00:29:29.881 { 00:29:29.881 "name": "spare", 00:29:29.881 "uuid": "d699d71a-322b-535a-b236-d6690024536c", 00:29:29.881 "is_configured": true, 00:29:29.881 "data_offset": 256, 00:29:29.881 
"data_size": 7936 00:29:29.881 }, 00:29:29.881 { 00:29:29.881 "name": "BaseBdev2", 00:29:29.881 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:29.881 "is_configured": true, 00:29:29.881 "data_offset": 256, 00:29:29.881 "data_size": 7936 00:29:29.881 } 00:29:29.881 ] 00:29:29.881 }' 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.881 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:30.449 "name": "raid_bdev1", 00:29:30.449 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:30.449 "strip_size_kb": 0, 00:29:30.449 "state": "online", 00:29:30.449 "raid_level": "raid1", 00:29:30.449 "superblock": true, 00:29:30.449 "num_base_bdevs": 2, 
00:29:30.449 "num_base_bdevs_discovered": 2, 00:29:30.449 "num_base_bdevs_operational": 2, 00:29:30.449 "base_bdevs_list": [ 00:29:30.449 { 00:29:30.449 "name": "spare", 00:29:30.449 "uuid": "d699d71a-322b-535a-b236-d6690024536c", 00:29:30.449 "is_configured": true, 00:29:30.449 "data_offset": 256, 00:29:30.449 "data_size": 7936 00:29:30.449 }, 00:29:30.449 { 00:29:30.449 "name": "BaseBdev2", 00:29:30.449 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:30.449 "is_configured": true, 00:29:30.449 "data_offset": 256, 00:29:30.449 "data_size": 7936 00:29:30.449 } 00:29:30.449 ] 00:29:30.449 }' 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.449 07:50:29 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:30.449 [2024-10-07 07:50:29.989746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.449 07:50:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.449 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.449 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:30.709 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.709 
07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:30.709 "name": "raid_bdev1", 00:29:30.709 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:30.709 "strip_size_kb": 0, 00:29:30.709 "state": "online", 00:29:30.709 "raid_level": "raid1", 00:29:30.709 "superblock": true, 00:29:30.709 "num_base_bdevs": 2, 00:29:30.709 "num_base_bdevs_discovered": 1, 00:29:30.709 "num_base_bdevs_operational": 1, 00:29:30.709 "base_bdevs_list": [ 00:29:30.709 { 00:29:30.709 "name": null, 00:29:30.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:30.709 "is_configured": false, 00:29:30.709 "data_offset": 0, 00:29:30.709 "data_size": 7936 00:29:30.709 }, 00:29:30.709 { 00:29:30.709 "name": "BaseBdev2", 00:29:30.709 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:30.709 "is_configured": true, 00:29:30.709 "data_offset": 256, 00:29:30.709 "data_size": 7936 00:29:30.709 } 00:29:30.709 ] 00:29:30.709 }' 00:29:30.709 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:30.709 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:30.968 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:30.968 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:30.968 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:30.968 [2024-10-07 07:50:30.437872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:30.968 [2024-10-07 07:50:30.438065] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:30.968 [2024-10-07 07:50:30.438084] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:30.968 [2024-10-07 07:50:30.438130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:30.968 [2024-10-07 07:50:30.454009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:29:30.968 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:30.968 07:50:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:30.968 [2024-10-07 07:50:30.456172] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:31.905 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.905 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:31.905 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:31.905 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:31.905 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:32.165 "name": "raid_bdev1", 00:29:32.165 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:32.165 "strip_size_kb": 0, 00:29:32.165 "state": "online", 
00:29:32.165 "raid_level": "raid1", 00:29:32.165 "superblock": true, 00:29:32.165 "num_base_bdevs": 2, 00:29:32.165 "num_base_bdevs_discovered": 2, 00:29:32.165 "num_base_bdevs_operational": 2, 00:29:32.165 "process": { 00:29:32.165 "type": "rebuild", 00:29:32.165 "target": "spare", 00:29:32.165 "progress": { 00:29:32.165 "blocks": 2560, 00:29:32.165 "percent": 32 00:29:32.165 } 00:29:32.165 }, 00:29:32.165 "base_bdevs_list": [ 00:29:32.165 { 00:29:32.165 "name": "spare", 00:29:32.165 "uuid": "d699d71a-322b-535a-b236-d6690024536c", 00:29:32.165 "is_configured": true, 00:29:32.165 "data_offset": 256, 00:29:32.165 "data_size": 7936 00:29:32.165 }, 00:29:32.165 { 00:29:32.165 "name": "BaseBdev2", 00:29:32.165 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:32.165 "is_configured": true, 00:29:32.165 "data_offset": 256, 00:29:32.165 "data_size": 7936 00:29:32.165 } 00:29:32.165 ] 00:29:32.165 }' 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:32.165 [2024-10-07 07:50:31.601638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:32.165 [2024-10-07 07:50:31.663988] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:32.165 [2024-10-07 
07:50:31.664325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.165 [2024-10-07 07:50:31.664452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:32.165 [2024-10-07 07:50:31.664503] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.165 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.424 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.424 "name": "raid_bdev1", 00:29:32.424 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:32.424 "strip_size_kb": 0, 00:29:32.424 "state": "online", 00:29:32.424 "raid_level": "raid1", 00:29:32.424 "superblock": true, 00:29:32.424 "num_base_bdevs": 2, 00:29:32.424 "num_base_bdevs_discovered": 1, 00:29:32.424 "num_base_bdevs_operational": 1, 00:29:32.424 "base_bdevs_list": [ 00:29:32.424 { 00:29:32.424 "name": null, 00:29:32.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.424 "is_configured": false, 00:29:32.424 "data_offset": 0, 00:29:32.424 "data_size": 7936 00:29:32.424 }, 00:29:32.424 { 00:29:32.424 "name": "BaseBdev2", 00:29:32.424 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:32.424 "is_configured": true, 00:29:32.424 "data_offset": 256, 00:29:32.424 "data_size": 7936 00:29:32.424 } 00:29:32.424 ] 00:29:32.424 }' 00:29:32.424 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.424 07:50:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:32.684 07:50:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:32.684 07:50:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:32.684 07:50:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:32.684 [2024-10-07 07:50:32.159291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:32.684 [2024-10-07 07:50:32.159368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.684 [2024-10-07 07:50:32.159393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:29:32.684 [2024-10-07 07:50:32.159408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.684 [2024-10-07 07:50:32.159950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.684 [2024-10-07 07:50:32.160041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:32.684 [2024-10-07 07:50:32.160151] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:32.684 [2024-10-07 07:50:32.160170] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:32.684 [2024-10-07 07:50:32.160183] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:32.684 [2024-10-07 07:50:32.160211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:32.684 [2024-10-07 07:50:32.176495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:29:32.684 spare 00:29:32.684 07:50:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:32.684 07:50:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:32.684 [2024-10-07 07:50:32.178794] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:34.063 "name": "raid_bdev1", 00:29:34.063 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:34.063 "strip_size_kb": 0, 00:29:34.063 "state": "online", 00:29:34.063 "raid_level": "raid1", 00:29:34.063 "superblock": true, 00:29:34.063 "num_base_bdevs": 2, 00:29:34.063 "num_base_bdevs_discovered": 2, 00:29:34.063 "num_base_bdevs_operational": 2, 00:29:34.063 "process": { 00:29:34.063 "type": "rebuild", 00:29:34.063 "target": "spare", 00:29:34.063 "progress": { 00:29:34.063 "blocks": 2560, 00:29:34.063 "percent": 32 00:29:34.063 } 00:29:34.063 }, 00:29:34.063 "base_bdevs_list": [ 00:29:34.063 { 00:29:34.063 "name": "spare", 00:29:34.063 "uuid": "d699d71a-322b-535a-b236-d6690024536c", 00:29:34.063 "is_configured": true, 00:29:34.063 "data_offset": 256, 00:29:34.063 "data_size": 7936 00:29:34.063 }, 00:29:34.063 { 00:29:34.063 "name": "BaseBdev2", 00:29:34.063 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:34.063 "is_configured": true, 00:29:34.063 "data_offset": 256, 00:29:34.063 "data_size": 7936 00:29:34.063 } 00:29:34.063 ] 00:29:34.063 }' 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:34.063 [2024-10-07 07:50:33.320385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:34.063 [2024-10-07 07:50:33.386674] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:34.063 [2024-10-07 07:50:33.386894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.063 [2024-10-07 07:50:33.386920] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:34.063 [2024-10-07 07:50:33.386931] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:34.063 "name": "raid_bdev1", 00:29:34.063 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:34.063 "strip_size_kb": 0, 00:29:34.063 "state": "online", 00:29:34.063 "raid_level": "raid1", 00:29:34.063 "superblock": true, 00:29:34.063 "num_base_bdevs": 2, 00:29:34.063 "num_base_bdevs_discovered": 1, 00:29:34.063 "num_base_bdevs_operational": 1, 00:29:34.063 "base_bdevs_list": [ 00:29:34.063 { 00:29:34.063 "name": null, 00:29:34.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.063 "is_configured": false, 00:29:34.063 "data_offset": 0, 00:29:34.063 "data_size": 7936 00:29:34.063 }, 00:29:34.063 { 00:29:34.063 "name": "BaseBdev2", 00:29:34.063 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:34.063 "is_configured": true, 00:29:34.063 "data_offset": 256, 00:29:34.063 "data_size": 7936 00:29:34.063 } 00:29:34.063 ] 00:29:34.063 }' 
00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:34.063 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:34.323 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.582 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.583 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:34.583 "name": "raid_bdev1", 00:29:34.583 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:34.583 "strip_size_kb": 0, 00:29:34.583 "state": "online", 00:29:34.583 "raid_level": "raid1", 00:29:34.583 "superblock": true, 00:29:34.583 "num_base_bdevs": 2, 00:29:34.583 "num_base_bdevs_discovered": 1, 00:29:34.583 "num_base_bdevs_operational": 1, 00:29:34.583 "base_bdevs_list": [ 00:29:34.583 { 00:29:34.583 "name": null, 00:29:34.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.583 "is_configured": false, 00:29:34.583 "data_offset": 0, 
00:29:34.583 "data_size": 7936 00:29:34.583 }, 00:29:34.583 { 00:29:34.583 "name": "BaseBdev2", 00:29:34.583 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:34.583 "is_configured": true, 00:29:34.583 "data_offset": 256, 00:29:34.583 "data_size": 7936 00:29:34.583 } 00:29:34.583 ] 00:29:34.583 }' 00:29:34.583 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:34.583 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:34.583 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:34.583 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:34.583 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:34.583 07:50:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.583 07:50:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:34.583 07:50:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.583 07:50:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:34.583 07:50:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:34.583 07:50:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:34.583 [2024-10-07 07:50:34.012482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:34.583 [2024-10-07 07:50:34.012542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.583 [2024-10-07 07:50:34.012568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:34.583 [2024-10-07 07:50:34.012580] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.583 [2024-10-07 07:50:34.013069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.583 [2024-10-07 07:50:34.013089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:34.583 [2024-10-07 07:50:34.013169] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:34.583 [2024-10-07 07:50:34.013184] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:34.583 [2024-10-07 07:50:34.013198] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:34.583 [2024-10-07 07:50:34.013210] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:34.583 BaseBdev1 00:29:34.583 07:50:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:34.583 07:50:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.520 "name": "raid_bdev1", 00:29:35.520 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:35.520 "strip_size_kb": 0, 00:29:35.520 "state": "online", 00:29:35.520 "raid_level": "raid1", 00:29:35.520 "superblock": true, 00:29:35.520 "num_base_bdevs": 2, 00:29:35.520 "num_base_bdevs_discovered": 1, 00:29:35.520 "num_base_bdevs_operational": 1, 00:29:35.520 "base_bdevs_list": [ 00:29:35.520 { 00:29:35.520 "name": null, 00:29:35.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.520 "is_configured": false, 00:29:35.520 "data_offset": 0, 00:29:35.520 "data_size": 7936 00:29:35.520 }, 00:29:35.520 { 00:29:35.520 "name": "BaseBdev2", 00:29:35.520 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:35.520 "is_configured": true, 00:29:35.520 "data_offset": 256, 00:29:35.520 "data_size": 7936 00:29:35.520 } 00:29:35.520 ] 00:29:35.520 }' 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.520 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:36.088 "name": "raid_bdev1", 00:29:36.088 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:36.088 "strip_size_kb": 0, 00:29:36.088 "state": "online", 00:29:36.088 "raid_level": "raid1", 00:29:36.088 "superblock": true, 00:29:36.088 "num_base_bdevs": 2, 00:29:36.088 "num_base_bdevs_discovered": 1, 00:29:36.088 "num_base_bdevs_operational": 1, 00:29:36.088 "base_bdevs_list": [ 00:29:36.088 { 00:29:36.088 "name": null, 00:29:36.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:36.088 "is_configured": false, 00:29:36.088 "data_offset": 0, 00:29:36.088 "data_size": 7936 00:29:36.088 }, 00:29:36.088 { 00:29:36.088 "name": "BaseBdev2", 00:29:36.088 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:36.088 "is_configured": true, 
00:29:36.088 "data_offset": 256, 00:29:36.088 "data_size": 7936 00:29:36.088 } 00:29:36.088 ] 00:29:36.088 }' 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # local es=0 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:36.088 [2024-10-07 07:50:35.606216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:36.088 [2024-10-07 07:50:35.606549] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:36.088 [2024-10-07 07:50:35.606588] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:36.088 request: 00:29:36.088 { 00:29:36.088 "base_bdev": "BaseBdev1", 00:29:36.088 "raid_bdev": "raid_bdev1", 00:29:36.088 "method": "bdev_raid_add_base_bdev", 00:29:36.088 "req_id": 1 00:29:36.088 } 00:29:36.088 Got JSON-RPC error response 00:29:36.088 response: 00:29:36.088 { 00:29:36.088 "code": -22, 00:29:36.088 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:36.088 } 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@656 -- # es=1 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:29:36.088 07:50:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.466 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.466 "name": "raid_bdev1", 00:29:37.466 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:37.466 "strip_size_kb": 0, 00:29:37.466 "state": "online", 00:29:37.466 "raid_level": "raid1", 00:29:37.466 "superblock": true, 00:29:37.466 "num_base_bdevs": 2, 00:29:37.466 "num_base_bdevs_discovered": 1, 00:29:37.466 "num_base_bdevs_operational": 1, 00:29:37.466 "base_bdevs_list": [ 00:29:37.466 { 00:29:37.466 "name": null, 00:29:37.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.467 "is_configured": false, 00:29:37.467 "data_offset": 0, 00:29:37.467 "data_size": 7936 00:29:37.467 }, 00:29:37.467 { 00:29:37.467 "name": "BaseBdev2", 00:29:37.467 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:37.467 "is_configured": true, 00:29:37.467 "data_offset": 256, 00:29:37.467 "data_size": 7936 00:29:37.467 } 00:29:37.467 ] 00:29:37.467 }' 
00:29:37.467 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.467 07:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:37.725 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:37.725 "name": "raid_bdev1", 00:29:37.726 "uuid": "1fc3fa4c-0bbb-4a67-b3ee-332a215ee063", 00:29:37.726 "strip_size_kb": 0, 00:29:37.726 "state": "online", 00:29:37.726 "raid_level": "raid1", 00:29:37.726 "superblock": true, 00:29:37.726 "num_base_bdevs": 2, 00:29:37.726 "num_base_bdevs_discovered": 1, 00:29:37.726 "num_base_bdevs_operational": 1, 00:29:37.726 "base_bdevs_list": [ 00:29:37.726 { 00:29:37.726 "name": null, 00:29:37.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.726 "is_configured": false, 00:29:37.726 "data_offset": 0, 
00:29:37.726 "data_size": 7936 00:29:37.726 }, 00:29:37.726 { 00:29:37.726 "name": "BaseBdev2", 00:29:37.726 "uuid": "5cb69287-f15f-51fc-89f7-9d2369f41699", 00:29:37.726 "is_configured": true, 00:29:37.726 "data_offset": 256, 00:29:37.726 "data_size": 7936 00:29:37.726 } 00:29:37.726 ] 00:29:37.726 }' 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86772 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' -z 86772 ']' 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # kill -0 86772 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # uname 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 86772 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:29:37.726 killing process with pid 86772 00:29:37.726 Received shutdown signal, test time was about 60.000000 seconds 00:29:37.726 00:29:37.726 Latency(us) 00:29:37.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.726 =================================================================================================================== 00:29:37.726 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:37.726 
07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # echo 'killing process with pid 86772' 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # kill 86772 00:29:37.726 07:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@977 -- # wait 86772 00:29:37.726 [2024-10-07 07:50:37.245117] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:37.726 [2024-10-07 07:50:37.245251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:37.726 [2024-10-07 07:50:37.245306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:37.726 [2024-10-07 07:50:37.245327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:38.294 [2024-10-07 07:50:37.562986] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:39.670 07:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:29:39.670 00:29:39.670 real 0m20.322s 00:29:39.670 user 0m26.345s 00:29:39.670 sys 0m2.881s 00:29:39.670 07:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # xtrace_disable 00:29:39.670 07:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:29:39.670 ************************************ 00:29:39.670 END TEST raid_rebuild_test_sb_4k 00:29:39.670 ************************************ 00:29:39.670 07:50:38 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:29:39.670 07:50:38 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:29:39.670 07:50:38 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:29:39.670 07:50:38 bdev_raid -- 
common/autotest_common.sh@1110 -- # xtrace_disable 00:29:39.670 07:50:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:39.670 ************************************ 00:29:39.670 START TEST raid_state_function_test_sb_md_separate 00:29:39.670 ************************************ 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 2 true 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87465 00:29:39.670 Process raid pid: 87465 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87465' 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87465 00:29:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # '[' -z 87465 ']' 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:39.670 07:50:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:39.670 [2024-10-07 07:50:39.059940] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:29:39.670 [2024-10-07 07:50:39.060117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.930 [2024-10-07 07:50:39.241765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.930 [2024-10-07 07:50:39.463349] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.189 [2024-10-07 07:50:39.671622] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:40.189 [2024-10-07 07:50:39.671661] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:40.447 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:40.447 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@867 -- # return 0 00:29:40.447 07:50:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:40.447 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.447 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:40.447 [2024-10-07 07:50:39.850311] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:40.447 [2024-10-07 07:50:39.850374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:40.447 [2024-10-07 07:50:39.850387] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:40.447 [2024-10-07 07:50:39.850404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:40.447 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.447 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:40.447 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:40.448 "name": "Existed_Raid", 00:29:40.448 "uuid": "e35ae7cc-c82d-44d9-8bea-949fbb3b6913", 00:29:40.448 "strip_size_kb": 0, 00:29:40.448 "state": "configuring", 00:29:40.448 "raid_level": "raid1", 00:29:40.448 "superblock": true, 00:29:40.448 "num_base_bdevs": 2, 00:29:40.448 "num_base_bdevs_discovered": 0, 00:29:40.448 "num_base_bdevs_operational": 2, 00:29:40.448 "base_bdevs_list": [ 00:29:40.448 { 00:29:40.448 "name": "BaseBdev1", 00:29:40.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.448 "is_configured": false, 00:29:40.448 "data_offset": 0, 00:29:40.448 "data_size": 0 00:29:40.448 }, 00:29:40.448 { 00:29:40.448 "name": "BaseBdev2", 00:29:40.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.448 "is_configured": false, 00:29:40.448 "data_offset": 0, 00:29:40.448 "data_size": 0 00:29:40.448 } 00:29:40.448 ] 
00:29:40.448 }' 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:40.448 07:50:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.015 [2024-10-07 07:50:40.350305] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:41.015 [2024-10-07 07:50:40.350348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.015 [2024-10-07 07:50:40.358324] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:41.015 [2024-10-07 07:50:40.358373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:41.015 [2024-10-07 07:50:40.358383] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:41.015 [2024-10-07 07:50:40.358399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:41.015 
07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.015 [2024-10-07 07:50:40.418120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:41.015 BaseBdev1 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local i 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.015 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.015 [ 00:29:41.015 { 00:29:41.015 "name": "BaseBdev1", 00:29:41.015 "aliases": [ 00:29:41.015 "840921e4-302e-4bf7-b346-725cddb982fb" 00:29:41.015 ], 00:29:41.015 "product_name": "Malloc disk", 00:29:41.015 "block_size": 4096, 00:29:41.015 "num_blocks": 8192, 00:29:41.016 "uuid": "840921e4-302e-4bf7-b346-725cddb982fb", 00:29:41.016 "md_size": 32, 00:29:41.016 "md_interleave": false, 00:29:41.016 "dif_type": 0, 00:29:41.016 "assigned_rate_limits": { 00:29:41.016 "rw_ios_per_sec": 0, 00:29:41.016 "rw_mbytes_per_sec": 0, 00:29:41.016 "r_mbytes_per_sec": 0, 00:29:41.016 "w_mbytes_per_sec": 0 00:29:41.016 }, 00:29:41.016 "claimed": true, 00:29:41.016 "claim_type": "exclusive_write", 00:29:41.016 "zoned": false, 00:29:41.016 "supported_io_types": { 00:29:41.016 "read": true, 00:29:41.016 "write": true, 00:29:41.016 "unmap": true, 00:29:41.016 "flush": true, 00:29:41.016 "reset": true, 00:29:41.016 "nvme_admin": false, 00:29:41.016 "nvme_io": false, 00:29:41.016 "nvme_io_md": false, 00:29:41.016 "write_zeroes": true, 00:29:41.016 "zcopy": true, 00:29:41.016 "get_zone_info": false, 00:29:41.016 "zone_management": false, 00:29:41.016 "zone_append": false, 00:29:41.016 "compare": false, 00:29:41.016 "compare_and_write": false, 00:29:41.016 "abort": true, 00:29:41.016 "seek_hole": false, 00:29:41.016 "seek_data": false, 00:29:41.016 "copy": true, 00:29:41.016 "nvme_iov_md": false 00:29:41.016 }, 00:29:41.016 "memory_domains": [ 00:29:41.016 { 00:29:41.016 "dma_device_id": "system", 00:29:41.016 "dma_device_type": 1 00:29:41.016 }, 
00:29:41.016 { 00:29:41.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:41.016 "dma_device_type": 2 00:29:41.016 } 00:29:41.016 ], 00:29:41.016 "driver_specific": {} 00:29:41.016 } 00:29:41.016 ] 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # return 0 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:41.016 "name": "Existed_Raid", 00:29:41.016 "uuid": "c8a7a9cf-dbc9-4221-8cce-d9a9cfe9bdf8", 00:29:41.016 "strip_size_kb": 0, 00:29:41.016 "state": "configuring", 00:29:41.016 "raid_level": "raid1", 00:29:41.016 "superblock": true, 00:29:41.016 "num_base_bdevs": 2, 00:29:41.016 "num_base_bdevs_discovered": 1, 00:29:41.016 "num_base_bdevs_operational": 2, 00:29:41.016 "base_bdevs_list": [ 00:29:41.016 { 00:29:41.016 "name": "BaseBdev1", 00:29:41.016 "uuid": "840921e4-302e-4bf7-b346-725cddb982fb", 00:29:41.016 "is_configured": true, 00:29:41.016 "data_offset": 256, 00:29:41.016 "data_size": 7936 00:29:41.016 }, 00:29:41.016 { 00:29:41.016 "name": "BaseBdev2", 00:29:41.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.016 "is_configured": false, 00:29:41.016 "data_offset": 0, 00:29:41.016 "data_size": 0 00:29:41.016 } 00:29:41.016 ] 00:29:41.016 }' 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:41.016 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:29:41.582 [2024-10-07 07:50:40.910321] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:41.582 [2024-10-07 07:50:40.910382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.582 [2024-10-07 07:50:40.918415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:41.582 [2024-10-07 07:50:40.920812] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:41.582 [2024-10-07 07:50:40.920868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:41.582 "name": "Existed_Raid", 00:29:41.582 "uuid": "fb688c54-1c00-4e66-b717-4be69cddbc8c", 00:29:41.582 "strip_size_kb": 0, 00:29:41.582 "state": "configuring", 00:29:41.582 "raid_level": "raid1", 00:29:41.582 "superblock": true, 00:29:41.582 "num_base_bdevs": 2, 00:29:41.582 "num_base_bdevs_discovered": 1, 00:29:41.582 
"num_base_bdevs_operational": 2, 00:29:41.582 "base_bdevs_list": [ 00:29:41.582 { 00:29:41.582 "name": "BaseBdev1", 00:29:41.582 "uuid": "840921e4-302e-4bf7-b346-725cddb982fb", 00:29:41.582 "is_configured": true, 00:29:41.582 "data_offset": 256, 00:29:41.582 "data_size": 7936 00:29:41.582 }, 00:29:41.582 { 00:29:41.582 "name": "BaseBdev2", 00:29:41.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.582 "is_configured": false, 00:29:41.582 "data_offset": 0, 00:29:41.582 "data_size": 0 00:29:41.582 } 00:29:41.582 ] 00:29:41.582 }' 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:41.582 07:50:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:41.840 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:29:41.840 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:41.840 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.098 [2024-10-07 07:50:41.427145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:42.098 [2024-10-07 07:50:41.427395] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:42.098 [2024-10-07 07:50:41.427411] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:42.098 [2024-10-07 07:50:41.427498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:42.098 [2024-10-07 07:50:41.427624] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:42.098 [2024-10-07 07:50:41.427636] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:42.098 [2024-10-07 
07:50:41.427759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.098 BaseBdev2 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local i 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.098 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.099 [ 00:29:42.099 { 00:29:42.099 "name": "BaseBdev2", 00:29:42.099 "aliases": [ 00:29:42.099 
"f79d4175-b0be-457f-ba9c-014c60c8c3a4" 00:29:42.099 ], 00:29:42.099 "product_name": "Malloc disk", 00:29:42.099 "block_size": 4096, 00:29:42.099 "num_blocks": 8192, 00:29:42.099 "uuid": "f79d4175-b0be-457f-ba9c-014c60c8c3a4", 00:29:42.099 "md_size": 32, 00:29:42.099 "md_interleave": false, 00:29:42.099 "dif_type": 0, 00:29:42.099 "assigned_rate_limits": { 00:29:42.099 "rw_ios_per_sec": 0, 00:29:42.099 "rw_mbytes_per_sec": 0, 00:29:42.099 "r_mbytes_per_sec": 0, 00:29:42.099 "w_mbytes_per_sec": 0 00:29:42.099 }, 00:29:42.099 "claimed": true, 00:29:42.099 "claim_type": "exclusive_write", 00:29:42.099 "zoned": false, 00:29:42.099 "supported_io_types": { 00:29:42.099 "read": true, 00:29:42.099 "write": true, 00:29:42.099 "unmap": true, 00:29:42.099 "flush": true, 00:29:42.099 "reset": true, 00:29:42.099 "nvme_admin": false, 00:29:42.099 "nvme_io": false, 00:29:42.099 "nvme_io_md": false, 00:29:42.099 "write_zeroes": true, 00:29:42.099 "zcopy": true, 00:29:42.099 "get_zone_info": false, 00:29:42.099 "zone_management": false, 00:29:42.099 "zone_append": false, 00:29:42.099 "compare": false, 00:29:42.099 "compare_and_write": false, 00:29:42.099 "abort": true, 00:29:42.099 "seek_hole": false, 00:29:42.099 "seek_data": false, 00:29:42.099 "copy": true, 00:29:42.099 "nvme_iov_md": false 00:29:42.099 }, 00:29:42.099 "memory_domains": [ 00:29:42.099 { 00:29:42.099 "dma_device_id": "system", 00:29:42.099 "dma_device_type": 1 00:29:42.099 }, 00:29:42.099 { 00:29:42.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.099 "dma_device_type": 2 00:29:42.099 } 00:29:42.099 ], 00:29:42.099 "driver_specific": {} 00:29:42.099 } 00:29:42.099 ] 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # return 0 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.099 07:50:41 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:42.099 "name": "Existed_Raid", 00:29:42.099 "uuid": "fb688c54-1c00-4e66-b717-4be69cddbc8c", 00:29:42.099 "strip_size_kb": 0, 00:29:42.099 "state": "online", 00:29:42.099 "raid_level": "raid1", 00:29:42.099 "superblock": true, 00:29:42.099 "num_base_bdevs": 2, 00:29:42.099 "num_base_bdevs_discovered": 2, 00:29:42.099 "num_base_bdevs_operational": 2, 00:29:42.099 "base_bdevs_list": [ 00:29:42.099 { 00:29:42.099 "name": "BaseBdev1", 00:29:42.099 "uuid": "840921e4-302e-4bf7-b346-725cddb982fb", 00:29:42.099 "is_configured": true, 00:29:42.099 "data_offset": 256, 00:29:42.099 "data_size": 7936 00:29:42.099 }, 00:29:42.099 { 00:29:42.099 "name": "BaseBdev2", 00:29:42.099 "uuid": "f79d4175-b0be-457f-ba9c-014c60c8c3a4", 00:29:42.099 "is_configured": true, 00:29:42.099 "data_offset": 256, 00:29:42.099 "data_size": 7936 00:29:42.099 } 00:29:42.099 ] 00:29:42.099 }' 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:42.099 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:29:42.665 07:50:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:42.665 [2024-10-07 07:50:41.947667] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.665 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.665 "name": "Existed_Raid", 00:29:42.665 "aliases": [ 00:29:42.665 "fb688c54-1c00-4e66-b717-4be69cddbc8c" 00:29:42.665 ], 00:29:42.665 "product_name": "Raid Volume", 00:29:42.665 "block_size": 4096, 00:29:42.665 "num_blocks": 7936, 00:29:42.666 "uuid": "fb688c54-1c00-4e66-b717-4be69cddbc8c", 00:29:42.666 "md_size": 32, 00:29:42.666 "md_interleave": false, 00:29:42.666 "dif_type": 0, 00:29:42.666 "assigned_rate_limits": { 00:29:42.666 "rw_ios_per_sec": 0, 00:29:42.666 "rw_mbytes_per_sec": 0, 00:29:42.666 "r_mbytes_per_sec": 0, 00:29:42.666 "w_mbytes_per_sec": 0 00:29:42.666 }, 00:29:42.666 "claimed": false, 00:29:42.666 "zoned": false, 00:29:42.666 "supported_io_types": { 00:29:42.666 "read": true, 00:29:42.666 "write": true, 00:29:42.666 "unmap": false, 00:29:42.666 "flush": false, 00:29:42.666 "reset": true, 00:29:42.666 "nvme_admin": false, 00:29:42.666 "nvme_io": false, 00:29:42.666 "nvme_io_md": false, 00:29:42.666 "write_zeroes": true, 00:29:42.666 "zcopy": false, 00:29:42.666 "get_zone_info": 
false, 00:29:42.666 "zone_management": false, 00:29:42.666 "zone_append": false, 00:29:42.666 "compare": false, 00:29:42.666 "compare_and_write": false, 00:29:42.666 "abort": false, 00:29:42.666 "seek_hole": false, 00:29:42.666 "seek_data": false, 00:29:42.666 "copy": false, 00:29:42.666 "nvme_iov_md": false 00:29:42.666 }, 00:29:42.666 "memory_domains": [ 00:29:42.666 { 00:29:42.666 "dma_device_id": "system", 00:29:42.666 "dma_device_type": 1 00:29:42.666 }, 00:29:42.666 { 00:29:42.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.666 "dma_device_type": 2 00:29:42.666 }, 00:29:42.666 { 00:29:42.666 "dma_device_id": "system", 00:29:42.666 "dma_device_type": 1 00:29:42.666 }, 00:29:42.666 { 00:29:42.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.666 "dma_device_type": 2 00:29:42.666 } 00:29:42.666 ], 00:29:42.666 "driver_specific": { 00:29:42.666 "raid": { 00:29:42.666 "uuid": "fb688c54-1c00-4e66-b717-4be69cddbc8c", 00:29:42.666 "strip_size_kb": 0, 00:29:42.666 "state": "online", 00:29:42.666 "raid_level": "raid1", 00:29:42.666 "superblock": true, 00:29:42.666 "num_base_bdevs": 2, 00:29:42.666 "num_base_bdevs_discovered": 2, 00:29:42.666 "num_base_bdevs_operational": 2, 00:29:42.666 "base_bdevs_list": [ 00:29:42.666 { 00:29:42.666 "name": "BaseBdev1", 00:29:42.666 "uuid": "840921e4-302e-4bf7-b346-725cddb982fb", 00:29:42.666 "is_configured": true, 00:29:42.666 "data_offset": 256, 00:29:42.666 "data_size": 7936 00:29:42.666 }, 00:29:42.666 { 00:29:42.666 "name": "BaseBdev2", 00:29:42.666 "uuid": "f79d4175-b0be-457f-ba9c-014c60c8c3a4", 00:29:42.666 "is_configured": true, 00:29:42.666 "data_offset": 256, 00:29:42.666 "data_size": 7936 00:29:42.666 } 00:29:42.666 ] 00:29:42.666 } 00:29:42.666 } 00:29:42.666 }' 00:29:42.666 07:50:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:42.666 07:50:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:42.666 BaseBdev2' 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.666 07:50:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.666 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.666 [2024-10-07 07:50:42.179481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:42.925 "name": "Existed_Raid", 
00:29:42.925 "uuid": "fb688c54-1c00-4e66-b717-4be69cddbc8c", 00:29:42.925 "strip_size_kb": 0, 00:29:42.925 "state": "online", 00:29:42.925 "raid_level": "raid1", 00:29:42.925 "superblock": true, 00:29:42.925 "num_base_bdevs": 2, 00:29:42.925 "num_base_bdevs_discovered": 1, 00:29:42.925 "num_base_bdevs_operational": 1, 00:29:42.925 "base_bdevs_list": [ 00:29:42.925 { 00:29:42.925 "name": null, 00:29:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.925 "is_configured": false, 00:29:42.925 "data_offset": 0, 00:29:42.925 "data_size": 7936 00:29:42.925 }, 00:29:42.925 { 00:29:42.925 "name": "BaseBdev2", 00:29:42.925 "uuid": "f79d4175-b0be-457f-ba9c-014c60c8c3a4", 00:29:42.925 "is_configured": true, 00:29:42.925 "data_offset": 256, 00:29:42.925 "data_size": 7936 00:29:42.925 } 00:29:42.925 ] 00:29:42.925 }' 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:42.925 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:43.493 [2024-10-07 07:50:42.803345] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:43.493 [2024-10-07 07:50:42.803449] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:43.493 [2024-10-07 07:50:42.915328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:43.493 [2024-10-07 07:50:42.915384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:43.493 [2024-10-07 07:50:42.915399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87465 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' -z 87465 ']' 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # kill -0 87465 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # uname 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:29:43.493 07:50:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 87465 00:29:43.493 07:50:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:29:43.493 killing process with pid 87465 00:29:43.493 07:50:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:29:43.493 07:50:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # echo 'killing process with pid 87465' 00:29:43.493 07:50:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # kill 87465 00:29:43.493 [2024-10-07 07:50:43.005431] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:43.493 07:50:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@977 -- # wait 87465 00:29:43.493 [2024-10-07 07:50:43.023500] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:44.868 07:50:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:29:44.868 00:29:44.868 real 0m5.422s 00:29:44.868 user 0m7.695s 00:29:44.868 sys 0m0.969s 00:29:44.868 07:50:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # xtrace_disable 00:29:44.869 ************************************ 00:29:44.869 END TEST raid_state_function_test_sb_md_separate 00:29:44.869 ************************************ 00:29:44.869 07:50:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:44.869 07:50:44 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:29:44.869 07:50:44 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:29:44.869 07:50:44 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:29:44.869 07:50:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:44.869 ************************************ 00:29:44.869 START TEST raid_superblock_test_md_separate 00:29:44.869 ************************************ 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # raid_superblock_test raid1 2 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87716 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87716 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # '[' -z 87716 ']' 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.869 07:50:44 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:44.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:44.869 07:50:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:45.127 [2024-10-07 07:50:44.542981] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:29:45.127 [2024-10-07 07:50:44.543153] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87716 ] 00:29:45.387 [2024-10-07 07:50:44.728499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.387 [2024-10-07 07:50:44.945581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.645 [2024-10-07 07:50:45.161681] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:45.645 [2024-10-07 07:50:45.161940] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@867 -- # return 0 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:45.903 07:50:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:45.903 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.163 malloc1 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.163 [2024-10-07 07:50:45.481359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:46.163 [2024-10-07 07:50:45.481431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:46.163 [2024-10-07 07:50:45.481463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:29:46.163 [2024-10-07 07:50:45.481477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:46.163 [2024-10-07 07:50:45.483824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:46.163 [2024-10-07 07:50:45.483863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:46.163 pt1 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.163 malloc2 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.163 [2024-10-07 07:50:45.549355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:46.163 [2024-10-07 07:50:45.549421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:46.163 [2024-10-07 07:50:45.549452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:46.163 [2024-10-07 07:50:45.549465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:46.163 [2024-10-07 07:50:45.551874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:46.163 [2024-10-07 07:50:45.551915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:46.163 pt2 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.163 [2024-10-07 07:50:45.557432] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:46.163 [2024-10-07 07:50:45.559749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:46.163 [2024-10-07 07:50:45.560080] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:46.163 [2024-10-07 07:50:45.560101] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:46.163 [2024-10-07 07:50:45.560202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:46.163 [2024-10-07 07:50:45.560341] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:46.163 [2024-10-07 07:50:45.560355] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:46.163 [2024-10-07 07:50:45.560512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.163 07:50:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.163 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.164 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.164 "name": "raid_bdev1", 00:29:46.164 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:46.164 "strip_size_kb": 0, 00:29:46.164 "state": "online", 00:29:46.164 "raid_level": "raid1", 00:29:46.164 "superblock": true, 00:29:46.164 "num_base_bdevs": 2, 00:29:46.164 "num_base_bdevs_discovered": 2, 00:29:46.164 "num_base_bdevs_operational": 2, 00:29:46.164 "base_bdevs_list": [ 00:29:46.164 { 00:29:46.164 "name": "pt1", 00:29:46.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:46.164 "is_configured": true, 00:29:46.164 "data_offset": 256, 00:29:46.164 "data_size": 7936 00:29:46.164 }, 00:29:46.164 { 00:29:46.164 "name": "pt2", 00:29:46.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:46.164 "is_configured": true, 00:29:46.164 "data_offset": 256, 00:29:46.164 "data_size": 7936 00:29:46.164 } 00:29:46.164 ] 00:29:46.164 }' 00:29:46.164 07:50:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:29:46.164 07:50:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.731 [2024-10-07 07:50:46.025768] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.731 "name": "raid_bdev1", 00:29:46.731 "aliases": [ 00:29:46.731 "59ade0cc-aba9-4d8a-900c-1522323e0ed7" 00:29:46.731 ], 00:29:46.731 "product_name": "Raid Volume", 00:29:46.731 "block_size": 4096, 00:29:46.731 "num_blocks": 7936, 00:29:46.731 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:46.731 "md_size": 32, 
00:29:46.731 "md_interleave": false, 00:29:46.731 "dif_type": 0, 00:29:46.731 "assigned_rate_limits": { 00:29:46.731 "rw_ios_per_sec": 0, 00:29:46.731 "rw_mbytes_per_sec": 0, 00:29:46.731 "r_mbytes_per_sec": 0, 00:29:46.731 "w_mbytes_per_sec": 0 00:29:46.731 }, 00:29:46.731 "claimed": false, 00:29:46.731 "zoned": false, 00:29:46.731 "supported_io_types": { 00:29:46.731 "read": true, 00:29:46.731 "write": true, 00:29:46.731 "unmap": false, 00:29:46.731 "flush": false, 00:29:46.731 "reset": true, 00:29:46.731 "nvme_admin": false, 00:29:46.731 "nvme_io": false, 00:29:46.731 "nvme_io_md": false, 00:29:46.731 "write_zeroes": true, 00:29:46.731 "zcopy": false, 00:29:46.731 "get_zone_info": false, 00:29:46.731 "zone_management": false, 00:29:46.731 "zone_append": false, 00:29:46.731 "compare": false, 00:29:46.731 "compare_and_write": false, 00:29:46.731 "abort": false, 00:29:46.731 "seek_hole": false, 00:29:46.731 "seek_data": false, 00:29:46.731 "copy": false, 00:29:46.731 "nvme_iov_md": false 00:29:46.731 }, 00:29:46.731 "memory_domains": [ 00:29:46.731 { 00:29:46.731 "dma_device_id": "system", 00:29:46.731 "dma_device_type": 1 00:29:46.731 }, 00:29:46.731 { 00:29:46.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:46.731 "dma_device_type": 2 00:29:46.731 }, 00:29:46.731 { 00:29:46.731 "dma_device_id": "system", 00:29:46.731 "dma_device_type": 1 00:29:46.731 }, 00:29:46.731 { 00:29:46.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:46.731 "dma_device_type": 2 00:29:46.731 } 00:29:46.731 ], 00:29:46.731 "driver_specific": { 00:29:46.731 "raid": { 00:29:46.731 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:46.731 "strip_size_kb": 0, 00:29:46.731 "state": "online", 00:29:46.731 "raid_level": "raid1", 00:29:46.731 "superblock": true, 00:29:46.731 "num_base_bdevs": 2, 00:29:46.731 "num_base_bdevs_discovered": 2, 00:29:46.731 "num_base_bdevs_operational": 2, 00:29:46.731 "base_bdevs_list": [ 00:29:46.731 { 00:29:46.731 "name": "pt1", 00:29:46.731 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:29:46.731 "is_configured": true, 00:29:46.731 "data_offset": 256, 00:29:46.731 "data_size": 7936 00:29:46.731 }, 00:29:46.731 { 00:29:46.731 "name": "pt2", 00:29:46.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:46.731 "is_configured": true, 00:29:46.731 "data_offset": 256, 00:29:46.731 "data_size": 7936 00:29:46.731 } 00:29:46.731 ] 00:29:46.731 } 00:29:46.731 } 00:29:46.731 }' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:46.731 pt2' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:46.731 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.732 [2024-10-07 07:50:46.249792] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=59ade0cc-aba9-4d8a-900c-1522323e0ed7 00:29:46.732 
07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 59ade0cc-aba9-4d8a-900c-1522323e0ed7 ']' 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.732 [2024-10-07 07:50:46.281497] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:46.732 [2024-10-07 07:50:46.281526] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:46.732 [2024-10-07 07:50:46.281608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:46.732 [2024-10-07 07:50:46.281671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:46.732 [2024-10-07 07:50:46.281686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.732 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:46.990 07:50:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # local es=0 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.990 [2024-10-07 07:50:46.405585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:46.990 [2024-10-07 07:50:46.407983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:46.990 [2024-10-07 07:50:46.408108] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:46.990 [2024-10-07 07:50:46.408288] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:29:46.990 [2024-10-07 07:50:46.408360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:46.990 [2024-10-07 07:50:46.408455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:46.990 request: 00:29:46.990 { 00:29:46.990 "name": "raid_bdev1", 00:29:46.990 "raid_level": "raid1", 00:29:46.990 "base_bdevs": [ 00:29:46.990 "malloc1", 00:29:46.990 "malloc2" 00:29:46.990 ], 00:29:46.990 "superblock": false, 00:29:46.990 "method": "bdev_raid_create", 00:29:46.990 "req_id": 1 00:29:46.990 } 00:29:46.990 Got JSON-RPC error response 00:29:46.990 response: 00:29:46.990 { 00:29:46.990 "code": -17, 00:29:46.990 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:46.990 } 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@656 -- # es=1 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.990 07:50:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:46.990 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.991 [2024-10-07 07:50:46.461538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:46.991 [2024-10-07 07:50:46.461603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:46.991 [2024-10-07 07:50:46.461623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:46.991 [2024-10-07 07:50:46.461638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:46.991 [2024-10-07 07:50:46.463877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:46.991 [2024-10-07 07:50:46.463920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:46.991 [2024-10-07 07:50:46.463975] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:46.991 [2024-10-07 07:50:46.464027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:46.991 pt1 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:46.991 
07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.991 "name": "raid_bdev1", 00:29:46.991 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:46.991 "strip_size_kb": 0, 00:29:46.991 "state": "configuring", 00:29:46.991 "raid_level": "raid1", 00:29:46.991 "superblock": true, 00:29:46.991 "num_base_bdevs": 2, 00:29:46.991 "num_base_bdevs_discovered": 1, 00:29:46.991 
"num_base_bdevs_operational": 2, 00:29:46.991 "base_bdevs_list": [ 00:29:46.991 { 00:29:46.991 "name": "pt1", 00:29:46.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:46.991 "is_configured": true, 00:29:46.991 "data_offset": 256, 00:29:46.991 "data_size": 7936 00:29:46.991 }, 00:29:46.991 { 00:29:46.991 "name": null, 00:29:46.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:46.991 "is_configured": false, 00:29:46.991 "data_offset": 256, 00:29:46.991 "data_size": 7936 00:29:46.991 } 00:29:46.991 ] 00:29:46.991 }' 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:46.991 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:47.557 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:29:47.557 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:47.558 [2024-10-07 07:50:46.901642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:47.558 [2024-10-07 07:50:46.901733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.558 [2024-10-07 07:50:46.901760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:47.558 [2024-10-07 07:50:46.901776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.558 
[2024-10-07 07:50:46.902040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.558 [2024-10-07 07:50:46.902068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:47.558 [2024-10-07 07:50:46.902136] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:47.558 [2024-10-07 07:50:46.902162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:47.558 [2024-10-07 07:50:46.902290] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:47.558 [2024-10-07 07:50:46.902305] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:47.558 [2024-10-07 07:50:46.902381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:47.558 [2024-10-07 07:50:46.902515] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:47.558 [2024-10-07 07:50:46.902525] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:47.558 [2024-10-07 07:50:46.902655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.558 pt2 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:47.558 "name": "raid_bdev1", 00:29:47.558 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:47.558 "strip_size_kb": 0, 00:29:47.558 "state": "online", 00:29:47.558 "raid_level": "raid1", 00:29:47.558 "superblock": true, 00:29:47.558 "num_base_bdevs": 2, 00:29:47.558 "num_base_bdevs_discovered": 2, 00:29:47.558 "num_base_bdevs_operational": 2, 00:29:47.558 "base_bdevs_list": [ 00:29:47.558 { 00:29:47.558 "name": 
"pt1", 00:29:47.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:47.558 "is_configured": true, 00:29:47.558 "data_offset": 256, 00:29:47.558 "data_size": 7936 00:29:47.558 }, 00:29:47.558 { 00:29:47.558 "name": "pt2", 00:29:47.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:47.558 "is_configured": true, 00:29:47.558 "data_offset": 256, 00:29:47.558 "data_size": 7936 00:29:47.558 } 00:29:47.558 ] 00:29:47.558 }' 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:47.558 07:50:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:47.815 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:47.815 [2024-10-07 07:50:47.334027] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:47.815 07:50:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.073 "name": "raid_bdev1", 00:29:48.073 "aliases": [ 00:29:48.073 "59ade0cc-aba9-4d8a-900c-1522323e0ed7" 00:29:48.073 ], 00:29:48.073 "product_name": "Raid Volume", 00:29:48.073 "block_size": 4096, 00:29:48.073 "num_blocks": 7936, 00:29:48.073 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:48.073 "md_size": 32, 00:29:48.073 "md_interleave": false, 00:29:48.073 "dif_type": 0, 00:29:48.073 "assigned_rate_limits": { 00:29:48.073 "rw_ios_per_sec": 0, 00:29:48.073 "rw_mbytes_per_sec": 0, 00:29:48.073 "r_mbytes_per_sec": 0, 00:29:48.073 "w_mbytes_per_sec": 0 00:29:48.073 }, 00:29:48.073 "claimed": false, 00:29:48.073 "zoned": false, 00:29:48.073 "supported_io_types": { 00:29:48.073 "read": true, 00:29:48.073 "write": true, 00:29:48.073 "unmap": false, 00:29:48.073 "flush": false, 00:29:48.073 "reset": true, 00:29:48.073 "nvme_admin": false, 00:29:48.073 "nvme_io": false, 00:29:48.073 "nvme_io_md": false, 00:29:48.073 "write_zeroes": true, 00:29:48.073 "zcopy": false, 00:29:48.073 "get_zone_info": false, 00:29:48.073 "zone_management": false, 00:29:48.073 "zone_append": false, 00:29:48.073 "compare": false, 00:29:48.073 "compare_and_write": false, 00:29:48.073 "abort": false, 00:29:48.073 "seek_hole": false, 00:29:48.073 "seek_data": false, 00:29:48.073 "copy": false, 00:29:48.073 "nvme_iov_md": false 00:29:48.073 }, 00:29:48.073 "memory_domains": [ 00:29:48.073 { 00:29:48.073 "dma_device_id": "system", 00:29:48.073 "dma_device_type": 1 00:29:48.073 }, 00:29:48.073 { 00:29:48.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:48.073 "dma_device_type": 2 00:29:48.073 }, 00:29:48.073 { 00:29:48.073 "dma_device_id": "system", 00:29:48.073 "dma_device_type": 1 00:29:48.073 }, 00:29:48.073 { 00:29:48.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:48.073 
"dma_device_type": 2 00:29:48.073 } 00:29:48.073 ], 00:29:48.073 "driver_specific": { 00:29:48.073 "raid": { 00:29:48.073 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:48.073 "strip_size_kb": 0, 00:29:48.073 "state": "online", 00:29:48.073 "raid_level": "raid1", 00:29:48.073 "superblock": true, 00:29:48.073 "num_base_bdevs": 2, 00:29:48.073 "num_base_bdevs_discovered": 2, 00:29:48.073 "num_base_bdevs_operational": 2, 00:29:48.073 "base_bdevs_list": [ 00:29:48.073 { 00:29:48.073 "name": "pt1", 00:29:48.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:48.073 "is_configured": true, 00:29:48.073 "data_offset": 256, 00:29:48.073 "data_size": 7936 00:29:48.073 }, 00:29:48.073 { 00:29:48.073 "name": "pt2", 00:29:48.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:48.073 "is_configured": true, 00:29:48.073 "data_offset": 256, 00:29:48.073 "data_size": 7936 00:29:48.073 } 00:29:48.073 ] 00:29:48.073 } 00:29:48.073 } 00:29:48.073 }' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:48.073 pt2' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.073 07:50:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:48.073 07:50:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.073 [2024-10-07 07:50:47.554058] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 59ade0cc-aba9-4d8a-900c-1522323e0ed7 '!=' 59ade0cc-aba9-4d8a-900c-1522323e0ed7 ']' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.073 [2024-10-07 07:50:47.601869] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.073 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.330 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:48.330 "name": "raid_bdev1", 00:29:48.330 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:48.330 "strip_size_kb": 0, 00:29:48.330 "state": "online", 00:29:48.330 "raid_level": "raid1", 00:29:48.330 "superblock": true, 00:29:48.330 "num_base_bdevs": 2, 00:29:48.330 "num_base_bdevs_discovered": 1, 00:29:48.330 "num_base_bdevs_operational": 1, 00:29:48.330 "base_bdevs_list": [ 00:29:48.330 { 00:29:48.330 "name": null, 00:29:48.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.330 
"is_configured": false, 00:29:48.330 "data_offset": 0, 00:29:48.330 "data_size": 7936 00:29:48.330 }, 00:29:48.330 { 00:29:48.330 "name": "pt2", 00:29:48.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:48.330 "is_configured": true, 00:29:48.330 "data_offset": 256, 00:29:48.330 "data_size": 7936 00:29:48.330 } 00:29:48.330 ] 00:29:48.330 }' 00:29:48.330 07:50:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:48.330 07:50:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.587 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:48.587 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.587 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.587 [2024-10-07 07:50:48.033933] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:48.587 [2024-10-07 07:50:48.033963] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:48.587 [2024-10-07 07:50:48.034042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:48.588 [2024-10-07 07:50:48.034091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:48.588 [2024-10-07 07:50:48.034105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.588 07:50:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:48.588 07:50:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.588 [2024-10-07 07:50:48.097946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:48.588 [2024-10-07 07:50:48.098162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:48.588 [2024-10-07 07:50:48.098225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:48.588 [2024-10-07 07:50:48.098325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:48.588 [2024-10-07 07:50:48.100892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:48.588 [2024-10-07 07:50:48.101071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:48.588 [2024-10-07 07:50:48.101232] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:48.588 [2024-10-07 07:50:48.101331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:48.588 [2024-10-07 07:50:48.101544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:48.588 [2024-10-07 07:50:48.101594] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:48.588 [2024-10-07 07:50:48.101717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:48.588 [2024-10-07 07:50:48.101884] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:48.588 [2024-10-07 07:50:48.101924] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:48.588 [2024-10-07 07:50:48.102142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:48.588 pt2 00:29:48.588 07:50:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:48.588 "name": "raid_bdev1", 00:29:48.588 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:48.588 "strip_size_kb": 0, 00:29:48.588 "state": "online", 00:29:48.588 "raid_level": "raid1", 00:29:48.588 "superblock": true, 00:29:48.588 "num_base_bdevs": 2, 00:29:48.588 "num_base_bdevs_discovered": 1, 00:29:48.588 "num_base_bdevs_operational": 1, 00:29:48.588 "base_bdevs_list": [ 00:29:48.588 { 00:29:48.588 "name": null, 00:29:48.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.588 "is_configured": false, 00:29:48.588 "data_offset": 256, 00:29:48.588 "data_size": 7936 00:29:48.588 }, 00:29:48.588 { 00:29:48.588 "name": "pt2", 00:29:48.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:48.588 "is_configured": true, 00:29:48.588 "data_offset": 256, 00:29:48.588 "data_size": 7936 00:29:48.588 } 00:29:48.588 ] 00:29:48.588 }' 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:48.588 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.154 [2024-10-07 07:50:48.570182] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:49.154 [2024-10-07 07:50:48.570215] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:49.154 [2024-10-07 07:50:48.570293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:49.154 [2024-10-07 07:50:48.570347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:29:49.154 [2024-10-07 07:50:48.570359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.154 [2024-10-07 07:50:48.630221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:49.154 [2024-10-07 07:50:48.630284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:49.154 [2024-10-07 07:50:48.630306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:29:49.154 [2024-10-07 
07:50:48.630318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:49.154 [2024-10-07 07:50:48.632697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:49.154 [2024-10-07 07:50:48.632743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:49.154 [2024-10-07 07:50:48.632812] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:49.154 [2024-10-07 07:50:48.632856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:49.154 [2024-10-07 07:50:48.632981] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:49.154 [2024-10-07 07:50:48.632992] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:49.154 [2024-10-07 07:50:48.633016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:29:49.154 [2024-10-07 07:50:48.633104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:49.154 [2024-10-07 07:50:48.633180] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:29:49.154 [2024-10-07 07:50:48.633191] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:49.154 [2024-10-07 07:50:48.633267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:49.154 [2024-10-07 07:50:48.633377] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:29:49.154 [2024-10-07 07:50:48.633389] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:29:49.154 [2024-10-07 07:50:48.633508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:49.154 pt1 00:29:49.154 07:50:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:49.154 "name": "raid_bdev1", 00:29:49.154 "uuid": "59ade0cc-aba9-4d8a-900c-1522323e0ed7", 00:29:49.154 "strip_size_kb": 0, 00:29:49.154 "state": "online", 00:29:49.154 "raid_level": "raid1", 00:29:49.154 "superblock": true, 00:29:49.154 "num_base_bdevs": 2, 00:29:49.154 "num_base_bdevs_discovered": 1, 00:29:49.154 "num_base_bdevs_operational": 1, 00:29:49.154 "base_bdevs_list": [ 00:29:49.154 { 00:29:49.154 "name": null, 00:29:49.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.154 "is_configured": false, 00:29:49.154 "data_offset": 256, 00:29:49.154 "data_size": 7936 00:29:49.154 }, 00:29:49.154 { 00:29:49.154 "name": "pt2", 00:29:49.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:49.154 "is_configured": true, 00:29:49.154 "data_offset": 256, 00:29:49.154 "data_size": 7936 00:29:49.154 } 00:29:49.154 ] 00:29:49.154 }' 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:49.154 07:50:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:29:49.720 07:50:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:49.720 [2024-10-07 07:50:49.122591] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:49.720 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 59ade0cc-aba9-4d8a-900c-1522323e0ed7 '!=' 59ade0cc-aba9-4d8a-900c-1522323e0ed7 ']' 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87716 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' -z 87716 ']' 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # kill -0 87716 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # uname 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 87716 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # echo 'killing process with pid 87716' 00:29:49.721 
killing process with pid 87716 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # kill 87716 00:29:49.721 [2024-10-07 07:50:49.202184] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:49.721 07:50:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@977 -- # wait 87716 00:29:49.721 [2024-10-07 07:50:49.202418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:49.721 [2024-10-07 07:50:49.202473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:49.721 [2024-10-07 07:50:49.202490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:29:49.980 [2024-10-07 07:50:49.434790] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:51.358 07:50:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:29:51.358 ************************************ 00:29:51.358 END TEST raid_superblock_test_md_separate 00:29:51.358 ************************************ 00:29:51.358 00:29:51.358 real 0m6.324s 00:29:51.358 user 0m9.382s 00:29:51.358 sys 0m1.211s 00:29:51.358 07:50:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # xtrace_disable 00:29:51.358 07:50:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:51.358 07:50:50 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:29:51.358 07:50:50 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:29:51.358 07:50:50 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:29:51.358 07:50:50 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:29:51.358 07:50:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:51.358 ************************************ 
00:29:51.358 START TEST raid_rebuild_test_sb_md_separate 00:29:51.358 ************************************ 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 2 true false true 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:51.358 
07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88046 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88046 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # '[' -z 88046 ']' 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local max_retries=100 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@843 -- # xtrace_disable 00:29:51.358 07:50:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:51.358 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:51.358 Zero copy mechanism will not be used. 00:29:51.358 [2024-10-07 07:50:50.911849] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:29:51.358 [2024-10-07 07:50:50.911989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88046 ] 00:29:51.617 [2024-10-07 07:50:51.073252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.875 [2024-10-07 07:50:51.290498] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.134 [2024-10-07 07:50:51.493993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:52.134 [2024-10-07 07:50:51.494180] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # return 0 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.394 BaseBdev1_malloc 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.394 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.394 [2024-10-07 07:50:51.782149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:52.394 [2024-10-07 07:50:51.782212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.395 [2024-10-07 07:50:51.782243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:52.395 [2024-10-07 07:50:51.782258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.395 [2024-10-07 07:50:51.784437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.395 [2024-10-07 07:50:51.784593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:52.395 BaseBdev1 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:29:52.395 07:50:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.395 BaseBdev2_malloc 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.395 [2024-10-07 07:50:51.848293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:52.395 [2024-10-07 07:50:51.848496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.395 [2024-10-07 07:50:51.848525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:52.395 [2024-10-07 07:50:51.848540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.395 [2024-10-07 07:50:51.850685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.395 [2024-10-07 07:50:51.850740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:52.395 BaseBdev2 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:29:52.395 spare_malloc 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.395 spare_delay 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.395 [2024-10-07 07:50:51.911671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:52.395 [2024-10-07 07:50:51.911746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.395 [2024-10-07 07:50:51.911769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:52.395 [2024-10-07 07:50:51.911783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.395 [2024-10-07 07:50:51.913950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.395 [2024-10-07 07:50:51.913994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:52.395 spare 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.395 [2024-10-07 07:50:51.919749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:52.395 [2024-10-07 07:50:51.921829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:52.395 [2024-10-07 07:50:51.922007] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:52.395 [2024-10-07 07:50:51.922024] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:52.395 [2024-10-07 07:50:51.922095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:52.395 [2024-10-07 07:50:51.922221] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:52.395 [2024-10-07 07:50:51.922231] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:52.395 [2024-10-07 07:50:51.922350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.395 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.673 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:52.673 "name": "raid_bdev1", 00:29:52.673 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:52.673 "strip_size_kb": 0, 00:29:52.673 "state": "online", 00:29:52.673 "raid_level": "raid1", 00:29:52.673 "superblock": true, 00:29:52.673 "num_base_bdevs": 2, 00:29:52.673 "num_base_bdevs_discovered": 2, 00:29:52.673 "num_base_bdevs_operational": 2, 00:29:52.673 "base_bdevs_list": [ 00:29:52.673 { 00:29:52.673 "name": "BaseBdev1", 00:29:52.673 "uuid": "e0c6c77c-6333-58c6-9aea-3ef28e72d669", 00:29:52.673 "is_configured": true, 00:29:52.673 "data_offset": 256, 
00:29:52.673 "data_size": 7936 00:29:52.673 }, 00:29:52.673 { 00:29:52.673 "name": "BaseBdev2", 00:29:52.673 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:52.673 "is_configured": true, 00:29:52.673 "data_offset": 256, 00:29:52.673 "data_size": 7936 00:29:52.673 } 00:29:52.673 ] 00:29:52.673 }' 00:29:52.673 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:52.673 07:50:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.942 [2024-10-07 07:50:52.332094] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:52.942 07:50:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:52.942 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:53.201 [2024-10-07 07:50:52.651955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:53.201 /dev/nbd0 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:53.201 07:50:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local i 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # break 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:53.201 1+0 records in 00:29:53.201 1+0 records out 00:29:53.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562555 s, 7.3 MB/s 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # size=4096 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@892 -- # return 0 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:29:53.201 07:50:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:29:54.137 7936+0 records in 00:29:54.137 7936+0 records out 00:29:54.137 32505856 bytes (33 MB, 31 MiB) copied, 0.772115 s, 42.1 MB/s 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:54.137 [2024-10-07 07:50:53.681078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:54.137 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:54.396 [2024-10-07 07:50:53.705170] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:54.396 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:54.396 "name": "raid_bdev1", 00:29:54.396 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:54.396 "strip_size_kb": 0, 00:29:54.396 "state": "online", 00:29:54.396 "raid_level": "raid1", 00:29:54.397 "superblock": true, 00:29:54.397 "num_base_bdevs": 2, 00:29:54.397 "num_base_bdevs_discovered": 1, 00:29:54.397 "num_base_bdevs_operational": 1, 00:29:54.397 "base_bdevs_list": [ 00:29:54.397 { 00:29:54.397 "name": null, 00:29:54.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.397 "is_configured": false, 00:29:54.397 "data_offset": 0, 00:29:54.397 "data_size": 7936 00:29:54.397 }, 00:29:54.397 { 00:29:54.397 "name": "BaseBdev2", 00:29:54.397 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:54.397 "is_configured": 
true, 00:29:54.397 "data_offset": 256, 00:29:54.397 "data_size": 7936 00:29:54.397 } 00:29:54.397 ] 00:29:54.397 }' 00:29:54.397 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:54.397 07:50:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:54.655 07:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:54.655 07:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:54.655 07:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:54.655 [2024-10-07 07:50:54.173303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:54.655 [2024-10-07 07:50:54.186448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:29:54.655 07:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:54.655 07:50:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:54.655 [2024-10-07 07:50:54.188561] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:56.031 "name": "raid_bdev1", 00:29:56.031 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:56.031 "strip_size_kb": 0, 00:29:56.031 "state": "online", 00:29:56.031 "raid_level": "raid1", 00:29:56.031 "superblock": true, 00:29:56.031 "num_base_bdevs": 2, 00:29:56.031 "num_base_bdevs_discovered": 2, 00:29:56.031 "num_base_bdevs_operational": 2, 00:29:56.031 "process": { 00:29:56.031 "type": "rebuild", 00:29:56.031 "target": "spare", 00:29:56.031 "progress": { 00:29:56.031 "blocks": 2560, 00:29:56.031 "percent": 32 00:29:56.031 } 00:29:56.031 }, 00:29:56.031 "base_bdevs_list": [ 00:29:56.031 { 00:29:56.031 "name": "spare", 00:29:56.031 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:29:56.031 "is_configured": true, 00:29:56.031 "data_offset": 256, 00:29:56.031 "data_size": 7936 00:29:56.031 }, 00:29:56.031 { 00:29:56.031 "name": "BaseBdev2", 00:29:56.031 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:56.031 "is_configured": true, 00:29:56.031 "data_offset": 256, 00:29:56.031 "data_size": 7936 00:29:56.031 } 00:29:56.031 ] 00:29:56.031 }' 00:29:56.031 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:56.032 
07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:56.032 [2024-10-07 07:50:55.334517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:56.032 [2024-10-07 07:50:55.396370] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:56.032 [2024-10-07 07:50:55.396448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:56.032 [2024-10-07 07:50:55.396465] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:56.032 [2024-10-07 07:50:55.396477] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:56.032 07:50:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:56.032 "name": "raid_bdev1", 00:29:56.032 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:56.032 "strip_size_kb": 0, 00:29:56.032 "state": "online", 00:29:56.032 "raid_level": "raid1", 00:29:56.032 "superblock": true, 00:29:56.032 "num_base_bdevs": 2, 00:29:56.032 "num_base_bdevs_discovered": 1, 00:29:56.032 "num_base_bdevs_operational": 1, 00:29:56.032 "base_bdevs_list": [ 00:29:56.032 { 00:29:56.032 "name": null, 00:29:56.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.032 "is_configured": false, 00:29:56.032 "data_offset": 0, 00:29:56.032 "data_size": 7936 00:29:56.032 }, 00:29:56.032 { 00:29:56.032 "name": "BaseBdev2", 00:29:56.032 "uuid": 
"8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:56.032 "is_configured": true, 00:29:56.032 "data_offset": 256, 00:29:56.032 "data_size": 7936 00:29:56.032 } 00:29:56.032 ] 00:29:56.032 }' 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:56.032 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:56.600 "name": "raid_bdev1", 00:29:56.600 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:56.600 "strip_size_kb": 0, 00:29:56.600 "state": "online", 00:29:56.600 "raid_level": "raid1", 00:29:56.600 "superblock": true, 00:29:56.600 
"num_base_bdevs": 2, 00:29:56.600 "num_base_bdevs_discovered": 1, 00:29:56.600 "num_base_bdevs_operational": 1, 00:29:56.600 "base_bdevs_list": [ 00:29:56.600 { 00:29:56.600 "name": null, 00:29:56.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.600 "is_configured": false, 00:29:56.600 "data_offset": 0, 00:29:56.600 "data_size": 7936 00:29:56.600 }, 00:29:56.600 { 00:29:56.600 "name": "BaseBdev2", 00:29:56.600 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:56.600 "is_configured": true, 00:29:56.600 "data_offset": 256, 00:29:56.600 "data_size": 7936 00:29:56.600 } 00:29:56.600 ] 00:29:56.600 }' 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:56.600 07:50:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:56.600 07:50:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:56.600 07:50:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:56.600 07:50:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:56.600 [2024-10-07 07:50:56.005960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:56.600 [2024-10-07 07:50:56.021342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:29:56.600 07:50:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:56.600 07:50:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:56.600 [2024-10-07 07:50:56.023433] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:57.536 "name": "raid_bdev1", 00:29:57.536 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:57.536 "strip_size_kb": 0, 00:29:57.536 "state": "online", 00:29:57.536 "raid_level": "raid1", 00:29:57.536 "superblock": true, 00:29:57.536 "num_base_bdevs": 2, 00:29:57.536 "num_base_bdevs_discovered": 2, 00:29:57.536 "num_base_bdevs_operational": 2, 00:29:57.536 "process": { 00:29:57.536 "type": "rebuild", 00:29:57.536 "target": "spare", 00:29:57.536 "progress": { 00:29:57.536 "blocks": 2560, 00:29:57.536 "percent": 32 00:29:57.536 } 00:29:57.536 
}, 00:29:57.536 "base_bdevs_list": [ 00:29:57.536 { 00:29:57.536 "name": "spare", 00:29:57.536 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:29:57.536 "is_configured": true, 00:29:57.536 "data_offset": 256, 00:29:57.536 "data_size": 7936 00:29:57.536 }, 00:29:57.536 { 00:29:57.536 "name": "BaseBdev2", 00:29:57.536 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:57.536 "is_configured": true, 00:29:57.536 "data_offset": 256, 00:29:57.536 "data_size": 7936 00:29:57.536 } 00:29:57.536 ] 00:29:57.536 }' 00:29:57.536 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:29:57.795 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=747 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:57.795 07:50:57 
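Editor's note: the `[: =: unary operator expected` message captured above is a real bash error, not test output — at `bdev_raid.sh` line 666 a variable expands to empty and unquoted, so `'[' "$var" = false ']'` collapses to `[ = false ]` and `[` sees `=` where it expects an operand. A minimal reproduction (the variable name `flag` is hypothetical, chosen for illustration) showing both the failure mode and the quoting fix:

```shell
#!/bin/sh
# Unquoted empty variable: the word disappears entirely, leaving `[ = false ]`,
# which produces exactly the "unary operator expected" error seen in the log.
flag=""
[ $flag = false ] 2>/dev/null || echo "unquoted test errored or was false"

# Quoted form: the empty string stays present as an operand, so the
# comparison is well-formed and simply evaluates to false.
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag is not false"
fi
```

Note the test still proceeds in the log because the script does not run under `set -e` at that point; the failed `[` just returns nonzero and execution continues to the next `--` line.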
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:57.795 "name": "raid_bdev1", 00:29:57.795 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:57.795 "strip_size_kb": 0, 00:29:57.795 "state": "online", 00:29:57.795 "raid_level": "raid1", 00:29:57.795 "superblock": true, 00:29:57.795 "num_base_bdevs": 2, 00:29:57.795 "num_base_bdevs_discovered": 2, 00:29:57.795 "num_base_bdevs_operational": 2, 00:29:57.795 "process": { 00:29:57.795 "type": "rebuild", 00:29:57.795 "target": "spare", 00:29:57.795 "progress": { 00:29:57.795 "blocks": 2816, 00:29:57.795 "percent": 35 00:29:57.795 } 00:29:57.795 }, 00:29:57.795 "base_bdevs_list": [ 00:29:57.795 { 00:29:57.795 "name": "spare", 00:29:57.795 "uuid": 
"51616145-3d23-55e8-a4a3-be52b4d98eba", 00:29:57.795 "is_configured": true, 00:29:57.795 "data_offset": 256, 00:29:57.795 "data_size": 7936 00:29:57.795 }, 00:29:57.795 { 00:29:57.795 "name": "BaseBdev2", 00:29:57.795 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:57.795 "is_configured": true, 00:29:57.795 "data_offset": 256, 00:29:57.795 "data_size": 7936 00:29:57.795 } 00:29:57.795 ] 00:29:57.795 }' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.795 07:50:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:29:59.191 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:59.191 "name": "raid_bdev1", 00:29:59.191 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:29:59.191 "strip_size_kb": 0, 00:29:59.191 "state": "online", 00:29:59.191 "raid_level": "raid1", 00:29:59.191 "superblock": true, 00:29:59.191 "num_base_bdevs": 2, 00:29:59.191 "num_base_bdevs_discovered": 2, 00:29:59.192 "num_base_bdevs_operational": 2, 00:29:59.192 "process": { 00:29:59.192 "type": "rebuild", 00:29:59.192 "target": "spare", 00:29:59.192 "progress": { 00:29:59.192 "blocks": 5632, 00:29:59.192 "percent": 70 00:29:59.192 } 00:29:59.192 }, 00:29:59.192 "base_bdevs_list": [ 00:29:59.192 { 00:29:59.192 "name": "spare", 00:29:59.192 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:29:59.192 "is_configured": true, 00:29:59.192 "data_offset": 256, 00:29:59.192 "data_size": 7936 00:29:59.192 }, 00:29:59.192 { 00:29:59.192 "name": "BaseBdev2", 00:29:59.192 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:29:59.192 "is_configured": true, 00:29:59.192 "data_offset": 256, 00:29:59.192 "data_size": 7936 00:29:59.192 } 00:29:59.192 ] 00:29:59.192 }' 00:29:59.192 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:59.192 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:59.192 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:59.192 07:50:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:59.192 07:50:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:59.844 [2024-10-07 07:50:59.144744] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:59.844 [2024-10-07 07:50:59.144877] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:59.844 [2024-10-07 07:50:59.145078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:00.104 07:50:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:00.104 "name": "raid_bdev1", 00:30:00.104 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:00.104 "strip_size_kb": 0, 00:30:00.104 "state": "online", 00:30:00.104 "raid_level": "raid1", 00:30:00.104 "superblock": true, 00:30:00.104 "num_base_bdevs": 2, 00:30:00.104 "num_base_bdevs_discovered": 2, 00:30:00.104 "num_base_bdevs_operational": 2, 00:30:00.104 "base_bdevs_list": [ 00:30:00.104 { 00:30:00.104 "name": "spare", 00:30:00.104 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:30:00.104 "is_configured": true, 00:30:00.104 "data_offset": 256, 00:30:00.104 "data_size": 7936 00:30:00.104 }, 00:30:00.104 { 00:30:00.104 "name": "BaseBdev2", 00:30:00.104 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:00.104 "is_configured": true, 00:30:00.104 "data_offset": 256, 00:30:00.104 "data_size": 7936 00:30:00.104 } 00:30:00.104 ] 00:30:00.104 }' 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:00.104 07:50:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:00.104 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:00.104 "name": "raid_bdev1", 00:30:00.104 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:00.104 "strip_size_kb": 0, 00:30:00.104 "state": "online", 00:30:00.104 "raid_level": "raid1", 00:30:00.104 "superblock": true, 00:30:00.104 "num_base_bdevs": 2, 00:30:00.104 "num_base_bdevs_discovered": 2, 00:30:00.104 "num_base_bdevs_operational": 2, 00:30:00.104 "base_bdevs_list": [ 00:30:00.104 { 00:30:00.104 "name": "spare", 00:30:00.104 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:30:00.104 "is_configured": true, 00:30:00.104 "data_offset": 256, 00:30:00.104 "data_size": 7936 00:30:00.104 }, 00:30:00.104 { 00:30:00.104 "name": "BaseBdev2", 00:30:00.104 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:00.104 "is_configured": true, 00:30:00.104 "data_offset": 256, 00:30:00.104 "data_size": 7936 00:30:00.104 } 00:30:00.104 ] 00:30:00.104 }' 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:00.363 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:00.364 "name": "raid_bdev1", 00:30:00.364 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:00.364 "strip_size_kb": 0, 00:30:00.364 "state": "online", 00:30:00.364 "raid_level": "raid1", 00:30:00.364 "superblock": true, 00:30:00.364 "num_base_bdevs": 2, 00:30:00.364 "num_base_bdevs_discovered": 2, 00:30:00.364 "num_base_bdevs_operational": 2, 00:30:00.364 "base_bdevs_list": [ 00:30:00.364 { 00:30:00.364 "name": "spare", 00:30:00.364 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:30:00.364 "is_configured": true, 00:30:00.364 "data_offset": 256, 00:30:00.364 "data_size": 7936 00:30:00.364 }, 00:30:00.364 { 00:30:00.364 "name": "BaseBdev2", 00:30:00.364 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:00.364 "is_configured": true, 00:30:00.364 "data_offset": 256, 00:30:00.364 "data_size": 7936 00:30:00.364 } 00:30:00.364 ] 00:30:00.364 }' 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:00.364 07:50:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:00.622 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:00.622 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:00.622 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:00.622 [2024-10-07 07:51:00.180048] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:00.622 [2024-10-07 07:51:00.180100] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:00.622 [2024-10-07 07:51:00.180269] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:00.622 [2024-10-07 07:51:00.180352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:00.622 [2024-10-07 07:51:00.180371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:00.882 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:01.141 /dev/nbd0 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local i 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # break 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.141 1+0 records in 00:30:01.141 1+0 records out 00:30:01.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362766 s, 11.3 MB/s 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # size=4096 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # return 0 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.141 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:01.401 /dev/nbd1 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local nbd_name=nbd1 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local i 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # (( i <= 20 )) 
00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # grep -q -w nbd1 /proc/partitions 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # break 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.401 1+0 records in 00:30:01.401 1+0 records out 00:30:01.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040348 s, 10.2 MB/s 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # size=4096 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # return 0 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.401 07:51:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:01.660 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:01.661 07:51:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:01.661 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:01.661 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.661 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:30:01.661 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.661 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.919 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:02.177 07:51:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.177 [2024-10-07 07:51:01.630343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:02.177 [2024-10-07 07:51:01.630410] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:02.177 [2024-10-07 07:51:01.630437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:02.177 [2024-10-07 07:51:01.630450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.177 [2024-10-07 07:51:01.632990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.177 [2024-10-07 07:51:01.633032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:02.177 [2024-10-07 07:51:01.633106] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:02.177 [2024-10-07 07:51:01.633179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.177 [2024-10-07 07:51:01.633350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:02.177 spare 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.177 [2024-10-07 07:51:01.733476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:02.177 [2024-10-07 07:51:01.733563] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:02.177 [2024-10-07 07:51:01.733719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:30:02.177 [2024-10-07 07:51:01.733906] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:02.177 [2024-10-07 07:51:01.733917] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:02.177 [2024-10-07 07:51:01.734057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:02.177 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.436 07:51:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.436 "name": "raid_bdev1", 00:30:02.436 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:02.436 "strip_size_kb": 0, 00:30:02.436 "state": "online", 00:30:02.436 "raid_level": "raid1", 00:30:02.436 "superblock": true, 00:30:02.436 "num_base_bdevs": 2, 00:30:02.436 "num_base_bdevs_discovered": 2, 00:30:02.436 "num_base_bdevs_operational": 2, 00:30:02.436 "base_bdevs_list": [ 00:30:02.436 { 00:30:02.436 "name": "spare", 00:30:02.436 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:30:02.436 "is_configured": true, 00:30:02.436 "data_offset": 256, 00:30:02.436 "data_size": 7936 00:30:02.436 }, 00:30:02.436 { 00:30:02.436 "name": "BaseBdev2", 00:30:02.436 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:02.436 "is_configured": true, 00:30:02.436 "data_offset": 256, 00:30:02.436 "data_size": 7936 00:30:02.436 } 00:30:02.436 ] 00:30:02.436 }' 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.436 07:51:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:02.695 "name": "raid_bdev1", 00:30:02.695 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:02.695 "strip_size_kb": 0, 00:30:02.695 "state": "online", 00:30:02.695 "raid_level": "raid1", 00:30:02.695 "superblock": true, 00:30:02.695 "num_base_bdevs": 2, 00:30:02.695 "num_base_bdevs_discovered": 2, 00:30:02.695 "num_base_bdevs_operational": 2, 00:30:02.695 "base_bdevs_list": [ 00:30:02.695 { 00:30:02.695 "name": "spare", 00:30:02.695 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:30:02.695 "is_configured": true, 00:30:02.695 "data_offset": 256, 00:30:02.695 "data_size": 7936 00:30:02.695 }, 00:30:02.695 { 00:30:02.695 "name": "BaseBdev2", 00:30:02.695 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:02.695 "is_configured": true, 00:30:02.695 "data_offset": 256, 00:30:02.695 "data_size": 7936 00:30:02.695 } 00:30:02.695 ] 00:30:02.695 }' 00:30:02.695 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:02.954 
07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.954 [2024-10-07 07:51:02.378568] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:02.954 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.954 "name": "raid_bdev1", 00:30:02.954 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:02.954 "strip_size_kb": 0, 00:30:02.954 "state": "online", 00:30:02.954 "raid_level": "raid1", 00:30:02.954 "superblock": true, 00:30:02.954 "num_base_bdevs": 2, 00:30:02.954 "num_base_bdevs_discovered": 1, 00:30:02.954 "num_base_bdevs_operational": 1, 00:30:02.954 "base_bdevs_list": [ 00:30:02.954 { 00:30:02.954 "name": null, 00:30:02.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.954 "is_configured": false, 00:30:02.954 "data_offset": 0, 00:30:02.954 "data_size": 7936 00:30:02.954 }, 00:30:02.954 { 00:30:02.954 
"name": "BaseBdev2", 00:30:02.954 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:02.954 "is_configured": true, 00:30:02.954 "data_offset": 256, 00:30:02.954 "data_size": 7936 00:30:02.955 } 00:30:02.955 ] 00:30:02.955 }' 00:30:02.955 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.955 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:03.522 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:03.522 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:03.522 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:03.522 [2024-10-07 07:51:02.818666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:03.522 [2024-10-07 07:51:02.818885] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:03.522 [2024-10-07 07:51:02.818906] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:03.522 [2024-10-07 07:51:02.818948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:03.522 [2024-10-07 07:51:02.834093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:30:03.522 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:03.522 07:51:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:03.522 [2024-10-07 07:51:02.836204] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:04.466 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:04.466 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:04.466 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:04.466 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:04.466 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:04.466 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.466 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:04.467 "name": "raid_bdev1", 00:30:04.467 
"uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:04.467 "strip_size_kb": 0, 00:30:04.467 "state": "online", 00:30:04.467 "raid_level": "raid1", 00:30:04.467 "superblock": true, 00:30:04.467 "num_base_bdevs": 2, 00:30:04.467 "num_base_bdevs_discovered": 2, 00:30:04.467 "num_base_bdevs_operational": 2, 00:30:04.467 "process": { 00:30:04.467 "type": "rebuild", 00:30:04.467 "target": "spare", 00:30:04.467 "progress": { 00:30:04.467 "blocks": 2560, 00:30:04.467 "percent": 32 00:30:04.467 } 00:30:04.467 }, 00:30:04.467 "base_bdevs_list": [ 00:30:04.467 { 00:30:04.467 "name": "spare", 00:30:04.467 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:30:04.467 "is_configured": true, 00:30:04.467 "data_offset": 256, 00:30:04.467 "data_size": 7936 00:30:04.467 }, 00:30:04.467 { 00:30:04.467 "name": "BaseBdev2", 00:30:04.467 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:04.467 "is_configured": true, 00:30:04.467 "data_offset": 256, 00:30:04.467 "data_size": 7936 00:30:04.467 } 00:30:04.467 ] 00:30:04.467 }' 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:04.467 07:51:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:04.467 [2024-10-07 07:51:03.990039] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.724 
[2024-10-07 07:51:04.044406] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:04.724 [2024-10-07 07:51:04.044520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:04.724 [2024-10-07 07:51:04.044539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.724 [2024-10-07 07:51:04.044552] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.724 07:51:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:04.724 "name": "raid_bdev1", 00:30:04.724 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:04.724 "strip_size_kb": 0, 00:30:04.724 "state": "online", 00:30:04.724 "raid_level": "raid1", 00:30:04.724 "superblock": true, 00:30:04.724 "num_base_bdevs": 2, 00:30:04.724 "num_base_bdevs_discovered": 1, 00:30:04.724 "num_base_bdevs_operational": 1, 00:30:04.724 "base_bdevs_list": [ 00:30:04.724 { 00:30:04.724 "name": null, 00:30:04.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.724 "is_configured": false, 00:30:04.724 "data_offset": 0, 00:30:04.724 "data_size": 7936 00:30:04.724 }, 00:30:04.724 { 00:30:04.724 "name": "BaseBdev2", 00:30:04.724 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:04.724 "is_configured": true, 00:30:04.724 "data_offset": 256, 00:30:04.724 "data_size": 7936 00:30:04.724 } 00:30:04.724 ] 00:30:04.724 }' 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:04.724 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:04.981 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:04.981 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:05.239 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:30:05.240 [2024-10-07 07:51:04.547764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:05.240 [2024-10-07 07:51:04.547835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:05.240 [2024-10-07 07:51:04.547883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:30:05.240 [2024-10-07 07:51:04.547904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:05.240 [2024-10-07 07:51:04.548208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:05.240 [2024-10-07 07:51:04.548231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:05.240 [2024-10-07 07:51:04.548302] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:05.240 [2024-10-07 07:51:04.548319] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:05.240 [2024-10-07 07:51:04.548336] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:05.240 [2024-10-07 07:51:04.548362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:05.240 [2024-10-07 07:51:04.564634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:30:05.240 spare 00:30:05.240 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:05.240 07:51:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:05.240 [2024-10-07 07:51:04.566977] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:06.175 "name": 
"raid_bdev1", 00:30:06.175 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:06.175 "strip_size_kb": 0, 00:30:06.175 "state": "online", 00:30:06.175 "raid_level": "raid1", 00:30:06.175 "superblock": true, 00:30:06.175 "num_base_bdevs": 2, 00:30:06.175 "num_base_bdevs_discovered": 2, 00:30:06.175 "num_base_bdevs_operational": 2, 00:30:06.175 "process": { 00:30:06.175 "type": "rebuild", 00:30:06.175 "target": "spare", 00:30:06.175 "progress": { 00:30:06.175 "blocks": 2560, 00:30:06.175 "percent": 32 00:30:06.175 } 00:30:06.175 }, 00:30:06.175 "base_bdevs_list": [ 00:30:06.175 { 00:30:06.175 "name": "spare", 00:30:06.175 "uuid": "51616145-3d23-55e8-a4a3-be52b4d98eba", 00:30:06.175 "is_configured": true, 00:30:06.175 "data_offset": 256, 00:30:06.175 "data_size": 7936 00:30:06.175 }, 00:30:06.175 { 00:30:06.175 "name": "BaseBdev2", 00:30:06.175 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:06.175 "is_configured": true, 00:30:06.175 "data_offset": 256, 00:30:06.175 "data_size": 7936 00:30:06.175 } 00:30:06.175 ] 00:30:06.175 }' 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:06.175 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:06.175 [2024-10-07 07:51:05.720605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:30:06.433 [2024-10-07 07:51:05.775036] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:06.433 [2024-10-07 07:51:05.775119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.433 [2024-10-07 07:51:05.775141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:06.433 [2024-10-07 07:51:05.775152] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:06.433 "name": "raid_bdev1", 00:30:06.433 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:06.433 "strip_size_kb": 0, 00:30:06.433 "state": "online", 00:30:06.433 "raid_level": "raid1", 00:30:06.433 "superblock": true, 00:30:06.433 "num_base_bdevs": 2, 00:30:06.433 "num_base_bdevs_discovered": 1, 00:30:06.433 "num_base_bdevs_operational": 1, 00:30:06.433 "base_bdevs_list": [ 00:30:06.433 { 00:30:06.433 "name": null, 00:30:06.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.433 "is_configured": false, 00:30:06.433 "data_offset": 0, 00:30:06.433 "data_size": 7936 00:30:06.433 }, 00:30:06.433 { 00:30:06.433 "name": "BaseBdev2", 00:30:06.433 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:06.433 "is_configured": true, 00:30:06.433 "data_offset": 256, 00:30:06.433 "data_size": 7936 00:30:06.433 } 00:30:06.433 ] 00:30:06.433 }' 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:06.433 07:51:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:06.691 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:06.691 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:06.691 07:51:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:06.691 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:06.691 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:06.691 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.691 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.691 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:06.692 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:06.950 "name": "raid_bdev1", 00:30:06.950 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:06.950 "strip_size_kb": 0, 00:30:06.950 "state": "online", 00:30:06.950 "raid_level": "raid1", 00:30:06.950 "superblock": true, 00:30:06.950 "num_base_bdevs": 2, 00:30:06.950 "num_base_bdevs_discovered": 1, 00:30:06.950 "num_base_bdevs_operational": 1, 00:30:06.950 "base_bdevs_list": [ 00:30:06.950 { 00:30:06.950 "name": null, 00:30:06.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.950 "is_configured": false, 00:30:06.950 "data_offset": 0, 00:30:06.950 "data_size": 7936 00:30:06.950 }, 00:30:06.950 { 00:30:06.950 "name": "BaseBdev2", 00:30:06.950 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:06.950 "is_configured": true, 00:30:06.950 "data_offset": 256, 00:30:06.950 "data_size": 7936 00:30:06.950 } 00:30:06.950 ] 00:30:06.950 }' 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:06.950 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:06.951 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:06.951 [2024-10-07 07:51:06.397213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:06.951 [2024-10-07 07:51:06.397282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.951 [2024-10-07 07:51:06.397311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:30:06.951 [2024-10-07 07:51:06.397324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.951 [2024-10-07 07:51:06.397562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.951 [2024-10-07 07:51:06.397578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:30:06.951 [2024-10-07 07:51:06.397640] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:06.951 [2024-10-07 07:51:06.397655] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:06.951 [2024-10-07 07:51:06.397668] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:06.951 [2024-10-07 07:51:06.397680] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:06.951 BaseBdev1 00:30:06.951 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:06.951 07:51:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:07.885 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.886 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:07.886 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:07.886 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.886 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:08.144 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:08.144 "name": "raid_bdev1", 00:30:08.144 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:08.144 "strip_size_kb": 0, 00:30:08.144 "state": "online", 00:30:08.144 "raid_level": "raid1", 00:30:08.144 "superblock": true, 00:30:08.144 "num_base_bdevs": 2, 00:30:08.144 "num_base_bdevs_discovered": 1, 00:30:08.144 "num_base_bdevs_operational": 1, 00:30:08.144 "base_bdevs_list": [ 00:30:08.144 { 00:30:08.144 "name": null, 00:30:08.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.144 "is_configured": false, 00:30:08.144 "data_offset": 0, 00:30:08.144 "data_size": 7936 00:30:08.144 }, 00:30:08.144 { 00:30:08.144 "name": "BaseBdev2", 00:30:08.144 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:08.144 "is_configured": true, 00:30:08.144 "data_offset": 256, 00:30:08.144 "data_size": 7936 00:30:08.144 } 00:30:08.144 ] 00:30:08.144 }' 00:30:08.144 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:08.144 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:08.403 "name": "raid_bdev1", 00:30:08.403 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:08.403 "strip_size_kb": 0, 00:30:08.403 "state": "online", 00:30:08.403 "raid_level": "raid1", 00:30:08.403 "superblock": true, 00:30:08.403 "num_base_bdevs": 2, 00:30:08.403 "num_base_bdevs_discovered": 1, 00:30:08.403 "num_base_bdevs_operational": 1, 00:30:08.403 "base_bdevs_list": [ 00:30:08.403 { 00:30:08.403 "name": null, 00:30:08.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.403 "is_configured": false, 00:30:08.403 "data_offset": 0, 00:30:08.403 "data_size": 7936 00:30:08.403 }, 00:30:08.403 { 00:30:08.403 "name": "BaseBdev2", 00:30:08.403 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:08.403 "is_configured": 
true, 00:30:08.403 "data_offset": 256, 00:30:08.403 "data_size": 7936 00:30:08.403 } 00:30:08.403 ] 00:30:08.403 }' 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:08.403 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # local es=0 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:08.662 [2024-10-07 07:51:07.973656] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:08.662 [2024-10-07 07:51:07.973844] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:08.662 [2024-10-07 07:51:07.973864] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:08.662 request: 00:30:08.662 { 00:30:08.662 "base_bdev": "BaseBdev1", 00:30:08.662 "raid_bdev": "raid_bdev1", 00:30:08.662 "method": "bdev_raid_add_base_bdev", 00:30:08.662 "req_id": 1 00:30:08.662 } 00:30:08.662 Got JSON-RPC error response 00:30:08.662 response: 00:30:08.662 { 00:30:08.662 "code": -22, 00:30:08.662 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:08.662 } 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@656 -- # es=1 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:30:08.662 07:51:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.713 07:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:09.713 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:09.713 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:09.713 "name": "raid_bdev1", 00:30:09.713 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:09.713 "strip_size_kb": 0, 00:30:09.713 "state": "online", 00:30:09.713 "raid_level": "raid1", 00:30:09.713 "superblock": true, 00:30:09.713 "num_base_bdevs": 2, 00:30:09.713 "num_base_bdevs_discovered": 1, 00:30:09.713 "num_base_bdevs_operational": 1, 00:30:09.713 "base_bdevs_list": [ 00:30:09.713 { 00:30:09.713 "name": null, 00:30:09.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.713 "is_configured": false, 00:30:09.713 
"data_offset": 0, 00:30:09.713 "data_size": 7936 00:30:09.713 }, 00:30:09.713 { 00:30:09.713 "name": "BaseBdev2", 00:30:09.713 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:09.713 "is_configured": true, 00:30:09.713 "data_offset": 256, 00:30:09.713 "data_size": 7936 00:30:09.713 } 00:30:09.713 ] 00:30:09.713 }' 00:30:09.713 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:09.713 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:09.971 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:09.971 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:09.971 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:09.971 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:09.971 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:09.971 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.972 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.972 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:09.972 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:09.972 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:09.972 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:09.972 "name": "raid_bdev1", 00:30:09.972 "uuid": "d06dd2fc-d7aa-4a85-83ed-49280ae917a3", 00:30:09.972 
"strip_size_kb": 0, 00:30:09.972 "state": "online", 00:30:09.972 "raid_level": "raid1", 00:30:09.972 "superblock": true, 00:30:09.972 "num_base_bdevs": 2, 00:30:09.972 "num_base_bdevs_discovered": 1, 00:30:09.972 "num_base_bdevs_operational": 1, 00:30:09.972 "base_bdevs_list": [ 00:30:09.972 { 00:30:09.972 "name": null, 00:30:09.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.972 "is_configured": false, 00:30:09.972 "data_offset": 0, 00:30:09.972 "data_size": 7936 00:30:09.972 }, 00:30:09.972 { 00:30:09.972 "name": "BaseBdev2", 00:30:09.972 "uuid": "8d7aa0bc-655b-5fcc-8193-d0405c77df4b", 00:30:09.972 "is_configured": true, 00:30:09.972 "data_offset": 256, 00:30:09.972 "data_size": 7936 00:30:09.972 } 00:30:09.972 ] 00:30:09.972 }' 00:30:09.972 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88046 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' -z 88046 ']' 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # kill -0 88046 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # uname 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 88046 00:30:10.231 07:51:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:10.231 killing process with pid 88046 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # echo 'killing process with pid 88046' 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # kill 88046 00:30:10.231 Received shutdown signal, test time was about 60.000000 seconds 00:30:10.231 00:30:10.231 Latency(us) 00:30:10.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.231 =================================================================================================================== 00:30:10.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:10.231 [2024-10-07 07:51:09.621698] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:10.231 07:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@977 -- # wait 88046 00:30:10.231 [2024-10-07 07:51:09.621873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:10.231 [2024-10-07 07:51:09.621946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:10.231 [2024-10-07 07:51:09.621962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:10.491 [2024-10-07 07:51:09.957461] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:11.868 07:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:30:11.868 00:30:11.868 real 0m20.447s 00:30:11.868 user 0m26.610s 00:30:11.868 sys 0m2.776s 00:30:11.868 07:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # xtrace_disable 
00:30:11.868 ************************************ 00:30:11.868 END TEST raid_rebuild_test_sb_md_separate 00:30:11.868 07:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:30:11.868 ************************************ 00:30:11.868 07:51:11 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:30:11.868 07:51:11 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:30:11.868 07:51:11 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:30:11.868 07:51:11 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:11.868 07:51:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:11.868 ************************************ 00:30:11.868 START TEST raid_state_function_test_sb_md_interleaved 00:30:11.868 ************************************ 00:30:11.868 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # raid_state_function_test raid1 2 true 00:30:11.868 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:11.869 07:51:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # 
raid_pid=88740 00:30:11.869 Process raid pid: 88740 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88740' 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88740 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # '[' -z 88740 ']' 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:11.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:11.869 07:51:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:12.128 [2024-10-07 07:51:11.459385] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:30:12.128 [2024-10-07 07:51:11.459568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.128 [2024-10-07 07:51:11.642088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.393 [2024-10-07 07:51:11.862508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.655 [2024-10-07 07:51:12.085720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:12.655 [2024-10-07 07:51:12.085766] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@867 -- # return 0 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:12.914 [2024-10-07 07:51:12.366785] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:12.914 [2024-10-07 07:51:12.366857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:12.914 [2024-10-07 07:51:12.366869] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:12.914 [2024-10-07 07:51:12.366886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:12.914 07:51:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.914 07:51:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.914 "name": "Existed_Raid", 00:30:12.914 "uuid": "c74e605f-260b-43d6-8dfa-842c595f716d", 00:30:12.914 "strip_size_kb": 0, 00:30:12.914 "state": "configuring", 00:30:12.914 "raid_level": "raid1", 00:30:12.914 "superblock": true, 00:30:12.914 "num_base_bdevs": 2, 00:30:12.914 "num_base_bdevs_discovered": 0, 00:30:12.914 "num_base_bdevs_operational": 2, 00:30:12.914 "base_bdevs_list": [ 00:30:12.914 { 00:30:12.914 "name": "BaseBdev1", 00:30:12.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.914 "is_configured": false, 00:30:12.914 "data_offset": 0, 00:30:12.914 "data_size": 0 00:30:12.914 }, 00:30:12.914 { 00:30:12.914 "name": "BaseBdev2", 00:30:12.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.914 "is_configured": false, 00:30:12.914 "data_offset": 0, 00:30:12.914 "data_size": 0 00:30:12.914 } 00:30:12.914 ] 00:30:12.914 }' 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.914 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.482 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.483 [2024-10-07 07:51:12.770793] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:13.483 [2024-10-07 07:51:12.770832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.483 [2024-10-07 07:51:12.778812] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:13.483 [2024-10-07 07:51:12.778857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:13.483 [2024-10-07 07:51:12.778867] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:13.483 [2024-10-07 07:51:12.778883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.483 [2024-10-07 07:51:12.834759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:13.483 BaseBdev1 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev1 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_timeout= 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local i 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.483 [ 00:30:13.483 { 00:30:13.483 "name": "BaseBdev1", 00:30:13.483 "aliases": [ 00:30:13.483 "18f8e7a5-6a17-47da-873a-6cef99965b9b" 00:30:13.483 ], 00:30:13.483 "product_name": "Malloc disk", 00:30:13.483 "block_size": 4128, 00:30:13.483 "num_blocks": 8192, 00:30:13.483 "uuid": "18f8e7a5-6a17-47da-873a-6cef99965b9b", 00:30:13.483 "md_size": 32, 00:30:13.483 
"md_interleave": true, 00:30:13.483 "dif_type": 0, 00:30:13.483 "assigned_rate_limits": { 00:30:13.483 "rw_ios_per_sec": 0, 00:30:13.483 "rw_mbytes_per_sec": 0, 00:30:13.483 "r_mbytes_per_sec": 0, 00:30:13.483 "w_mbytes_per_sec": 0 00:30:13.483 }, 00:30:13.483 "claimed": true, 00:30:13.483 "claim_type": "exclusive_write", 00:30:13.483 "zoned": false, 00:30:13.483 "supported_io_types": { 00:30:13.483 "read": true, 00:30:13.483 "write": true, 00:30:13.483 "unmap": true, 00:30:13.483 "flush": true, 00:30:13.483 "reset": true, 00:30:13.483 "nvme_admin": false, 00:30:13.483 "nvme_io": false, 00:30:13.483 "nvme_io_md": false, 00:30:13.483 "write_zeroes": true, 00:30:13.483 "zcopy": true, 00:30:13.483 "get_zone_info": false, 00:30:13.483 "zone_management": false, 00:30:13.483 "zone_append": false, 00:30:13.483 "compare": false, 00:30:13.483 "compare_and_write": false, 00:30:13.483 "abort": true, 00:30:13.483 "seek_hole": false, 00:30:13.483 "seek_data": false, 00:30:13.483 "copy": true, 00:30:13.483 "nvme_iov_md": false 00:30:13.483 }, 00:30:13.483 "memory_domains": [ 00:30:13.483 { 00:30:13.483 "dma_device_id": "system", 00:30:13.483 "dma_device_type": 1 00:30:13.483 }, 00:30:13.483 { 00:30:13.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.483 "dma_device_type": 2 00:30:13.483 } 00:30:13.483 ], 00:30:13.483 "driver_specific": {} 00:30:13.483 } 00:30:13.483 ] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # return 0 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:13.483 07:51:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:13.483 "name": "Existed_Raid", 00:30:13.483 "uuid": "7e3cb8c6-4c30-4b94-b31a-a45ffecf244e", 00:30:13.483 "strip_size_kb": 0, 00:30:13.483 "state": "configuring", 00:30:13.483 "raid_level": "raid1", 
00:30:13.483 "superblock": true, 00:30:13.483 "num_base_bdevs": 2, 00:30:13.483 "num_base_bdevs_discovered": 1, 00:30:13.483 "num_base_bdevs_operational": 2, 00:30:13.483 "base_bdevs_list": [ 00:30:13.483 { 00:30:13.483 "name": "BaseBdev1", 00:30:13.483 "uuid": "18f8e7a5-6a17-47da-873a-6cef99965b9b", 00:30:13.483 "is_configured": true, 00:30:13.483 "data_offset": 256, 00:30:13.483 "data_size": 7936 00:30:13.483 }, 00:30:13.483 { 00:30:13.483 "name": "BaseBdev2", 00:30:13.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.483 "is_configured": false, 00:30:13.483 "data_offset": 0, 00:30:13.483 "data_size": 0 00:30:13.483 } 00:30:13.483 ] 00:30:13.483 }' 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:13.483 07:51:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.053 [2024-10-07 07:51:13.310962] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:14.053 [2024-10-07 07:51:13.311141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 
-- # xtrace_disable 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.053 [2024-10-07 07:51:13.319028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:14.053 [2024-10-07 07:51:13.321524] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:14.053 [2024-10-07 07:51:13.321720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:14.053 
07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:14.053 "name": "Existed_Raid", 00:30:14.053 "uuid": "31a75045-2195-4ad1-8745-925d53e42b47", 00:30:14.053 "strip_size_kb": 0, 00:30:14.053 "state": "configuring", 00:30:14.053 "raid_level": "raid1", 00:30:14.053 "superblock": true, 00:30:14.053 "num_base_bdevs": 2, 00:30:14.053 "num_base_bdevs_discovered": 1, 00:30:14.053 "num_base_bdevs_operational": 2, 00:30:14.053 "base_bdevs_list": [ 00:30:14.053 { 00:30:14.053 "name": "BaseBdev1", 00:30:14.053 "uuid": "18f8e7a5-6a17-47da-873a-6cef99965b9b", 00:30:14.053 "is_configured": true, 00:30:14.053 "data_offset": 256, 00:30:14.053 "data_size": 7936 00:30:14.053 }, 00:30:14.053 { 00:30:14.053 "name": "BaseBdev2", 00:30:14.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.053 "is_configured": false, 00:30:14.053 "data_offset": 0, 00:30:14.053 "data_size": 0 00:30:14.053 } 00:30:14.053 ] 00:30:14.053 }' 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:30:14.053 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.312 [2024-10-07 07:51:13.770695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:14.312 BaseBdev2 00:30:14.312 [2024-10-07 07:51:13.771140] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:14.312 [2024-10-07 07:51:13.771161] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:14.312 [2024-10-07 07:51:13.771254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:14.312 [2024-10-07 07:51:13.771326] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:14.312 [2024-10-07 07:51:13.771338] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:14.312 [2024-10-07 07:51:13.771406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_name=BaseBdev2 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_timeout= 
00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local i 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # [[ -z '' ]] 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # bdev_timeout=2000 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # rpc_cmd bdev_wait_for_examine 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.312 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.312 [ 00:30:14.312 { 00:30:14.312 "name": "BaseBdev2", 00:30:14.312 "aliases": [ 00:30:14.312 "99afddf1-9842-4555-891b-b7a027a34832" 00:30:14.312 ], 00:30:14.312 "product_name": "Malloc disk", 00:30:14.312 "block_size": 4128, 00:30:14.312 "num_blocks": 8192, 00:30:14.312 "uuid": "99afddf1-9842-4555-891b-b7a027a34832", 00:30:14.312 "md_size": 32, 00:30:14.312 "md_interleave": true, 00:30:14.312 "dif_type": 0, 00:30:14.312 "assigned_rate_limits": { 00:30:14.312 "rw_ios_per_sec": 0, 00:30:14.312 "rw_mbytes_per_sec": 0, 00:30:14.312 "r_mbytes_per_sec": 0, 00:30:14.312 "w_mbytes_per_sec": 0 00:30:14.312 }, 00:30:14.312 "claimed": true, 00:30:14.313 "claim_type": "exclusive_write", 
00:30:14.313 "zoned": false, 00:30:14.313 "supported_io_types": { 00:30:14.313 "read": true, 00:30:14.313 "write": true, 00:30:14.313 "unmap": true, 00:30:14.313 "flush": true, 00:30:14.313 "reset": true, 00:30:14.313 "nvme_admin": false, 00:30:14.313 "nvme_io": false, 00:30:14.313 "nvme_io_md": false, 00:30:14.313 "write_zeroes": true, 00:30:14.313 "zcopy": true, 00:30:14.313 "get_zone_info": false, 00:30:14.313 "zone_management": false, 00:30:14.313 "zone_append": false, 00:30:14.313 "compare": false, 00:30:14.313 "compare_and_write": false, 00:30:14.313 "abort": true, 00:30:14.313 "seek_hole": false, 00:30:14.313 "seek_data": false, 00:30:14.313 "copy": true, 00:30:14.313 "nvme_iov_md": false 00:30:14.313 }, 00:30:14.313 "memory_domains": [ 00:30:14.313 { 00:30:14.313 "dma_device_id": "system", 00:30:14.313 "dma_device_type": 1 00:30:14.313 }, 00:30:14.313 { 00:30:14.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.313 "dma_device_type": 2 00:30:14.313 } 00:30:14.313 ], 00:30:14.313 "driver_specific": {} 00:30:14.313 } 00:30:14.313 ] 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # return 0 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:14.313 
07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:14.313 "name": "Existed_Raid", 00:30:14.313 "uuid": "31a75045-2195-4ad1-8745-925d53e42b47", 00:30:14.313 "strip_size_kb": 0, 00:30:14.313 "state": "online", 00:30:14.313 "raid_level": "raid1", 00:30:14.313 "superblock": true, 00:30:14.313 "num_base_bdevs": 2, 00:30:14.313 "num_base_bdevs_discovered": 2, 00:30:14.313 
"num_base_bdevs_operational": 2, 00:30:14.313 "base_bdevs_list": [ 00:30:14.313 { 00:30:14.313 "name": "BaseBdev1", 00:30:14.313 "uuid": "18f8e7a5-6a17-47da-873a-6cef99965b9b", 00:30:14.313 "is_configured": true, 00:30:14.313 "data_offset": 256, 00:30:14.313 "data_size": 7936 00:30:14.313 }, 00:30:14.313 { 00:30:14.313 "name": "BaseBdev2", 00:30:14.313 "uuid": "99afddf1-9842-4555-891b-b7a027a34832", 00:30:14.313 "is_configured": true, 00:30:14.313 "data_offset": 256, 00:30:14.313 "data_size": 7936 00:30:14.313 } 00:30:14.313 ] 00:30:14.313 }' 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:14.313 07:51:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:14.883 07:51:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.883 [2024-10-07 07:51:14.255210] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.883 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:14.883 "name": "Existed_Raid", 00:30:14.883 "aliases": [ 00:30:14.883 "31a75045-2195-4ad1-8745-925d53e42b47" 00:30:14.883 ], 00:30:14.883 "product_name": "Raid Volume", 00:30:14.883 "block_size": 4128, 00:30:14.883 "num_blocks": 7936, 00:30:14.883 "uuid": "31a75045-2195-4ad1-8745-925d53e42b47", 00:30:14.883 "md_size": 32, 00:30:14.883 "md_interleave": true, 00:30:14.883 "dif_type": 0, 00:30:14.883 "assigned_rate_limits": { 00:30:14.883 "rw_ios_per_sec": 0, 00:30:14.883 "rw_mbytes_per_sec": 0, 00:30:14.883 "r_mbytes_per_sec": 0, 00:30:14.883 "w_mbytes_per_sec": 0 00:30:14.883 }, 00:30:14.883 "claimed": false, 00:30:14.883 "zoned": false, 00:30:14.883 "supported_io_types": { 00:30:14.883 "read": true, 00:30:14.883 "write": true, 00:30:14.883 "unmap": false, 00:30:14.883 "flush": false, 00:30:14.883 "reset": true, 00:30:14.883 "nvme_admin": false, 00:30:14.883 "nvme_io": false, 00:30:14.883 "nvme_io_md": false, 00:30:14.883 "write_zeroes": true, 00:30:14.883 "zcopy": false, 00:30:14.883 "get_zone_info": false, 00:30:14.883 "zone_management": false, 00:30:14.883 "zone_append": false, 00:30:14.883 "compare": false, 00:30:14.883 "compare_and_write": false, 00:30:14.883 "abort": false, 00:30:14.883 "seek_hole": false, 00:30:14.883 "seek_data": false, 00:30:14.883 "copy": false, 00:30:14.883 "nvme_iov_md": false 00:30:14.883 }, 00:30:14.883 "memory_domains": [ 00:30:14.883 { 00:30:14.883 "dma_device_id": "system", 00:30:14.883 "dma_device_type": 1 00:30:14.883 }, 00:30:14.883 { 00:30:14.883 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:30:14.883 "dma_device_type": 2 00:30:14.883 }, 00:30:14.883 { 00:30:14.883 "dma_device_id": "system", 00:30:14.883 "dma_device_type": 1 00:30:14.883 }, 00:30:14.883 { 00:30:14.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.883 "dma_device_type": 2 00:30:14.883 } 00:30:14.883 ], 00:30:14.883 "driver_specific": { 00:30:14.883 "raid": { 00:30:14.883 "uuid": "31a75045-2195-4ad1-8745-925d53e42b47", 00:30:14.883 "strip_size_kb": 0, 00:30:14.883 "state": "online", 00:30:14.883 "raid_level": "raid1", 00:30:14.883 "superblock": true, 00:30:14.883 "num_base_bdevs": 2, 00:30:14.884 "num_base_bdevs_discovered": 2, 00:30:14.884 "num_base_bdevs_operational": 2, 00:30:14.884 "base_bdevs_list": [ 00:30:14.884 { 00:30:14.884 "name": "BaseBdev1", 00:30:14.884 "uuid": "18f8e7a5-6a17-47da-873a-6cef99965b9b", 00:30:14.884 "is_configured": true, 00:30:14.884 "data_offset": 256, 00:30:14.884 "data_size": 7936 00:30:14.884 }, 00:30:14.884 { 00:30:14.884 "name": "BaseBdev2", 00:30:14.884 "uuid": "99afddf1-9842-4555-891b-b7a027a34832", 00:30:14.884 "is_configured": true, 00:30:14.884 "data_offset": 256, 00:30:14.884 "data_size": 7936 00:30:14.884 } 00:30:14.884 ] 00:30:14.884 } 00:30:14.884 } 00:30:14.884 }' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:14.884 BaseBdev2' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:14.884 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:30:15.144 
07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:15.144 [2024-10-07 07:51:14.474968] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:15.144 07:51:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:15.144 "name": "Existed_Raid", 00:30:15.144 "uuid": "31a75045-2195-4ad1-8745-925d53e42b47", 00:30:15.144 "strip_size_kb": 0, 00:30:15.144 "state": "online", 00:30:15.144 "raid_level": "raid1", 00:30:15.144 "superblock": true, 00:30:15.144 "num_base_bdevs": 2, 00:30:15.144 "num_base_bdevs_discovered": 1, 00:30:15.144 "num_base_bdevs_operational": 1, 00:30:15.144 "base_bdevs_list": [ 00:30:15.144 { 00:30:15.144 "name": null, 00:30:15.144 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:15.144 "is_configured": false, 00:30:15.144 "data_offset": 0, 00:30:15.144 "data_size": 7936 00:30:15.144 }, 00:30:15.144 { 00:30:15.144 "name": "BaseBdev2", 00:30:15.144 "uuid": "99afddf1-9842-4555-891b-b7a027a34832", 00:30:15.144 "is_configured": true, 00:30:15.144 "data_offset": 256, 00:30:15.144 "data_size": 7936 00:30:15.144 } 00:30:15.144 ] 00:30:15.144 }' 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:15.144 07:51:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:15.712 07:51:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:15.712 [2024-10-07 07:51:15.075419] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:15.712 [2024-10-07 07:51:15.075680] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:15.712 [2024-10-07 07:51:15.177343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:15.712 [2024-10-07 07:51:15.177529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:15.712 [2024-10-07 07:51:15.177664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:15.712 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88740 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' -z 88740 ']' 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # kill -0 88740 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # uname 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 88740 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:15.713 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:15.971 killing process with pid 88740 00:30:15.971 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # echo 'killing process with pid 88740' 00:30:15.971 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # kill 88740 00:30:15.971 [2024-10-07 07:51:15.272911] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:15.971 07:51:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@977 -- # wait 88740 00:30:15.971 [2024-10-07 07:51:15.291242] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:17.349 
07:51:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:30:17.349 00:30:17.349 real 0m5.272s 00:30:17.349 user 0m7.422s 00:30:17.349 sys 0m0.989s 00:30:17.349 07:51:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:17.349 ************************************ 00:30:17.349 END TEST raid_state_function_test_sb_md_interleaved 00:30:17.349 ************************************ 00:30:17.349 07:51:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:17.349 07:51:16 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:30:17.349 07:51:16 bdev_raid -- common/autotest_common.sh@1104 -- # '[' 4 -le 1 ']' 00:30:17.349 07:51:16 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:17.349 07:51:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:17.349 ************************************ 00:30:17.349 START TEST raid_superblock_test_md_interleaved 00:30:17.349 ************************************ 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # raid_superblock_test raid1 2 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:30:17.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88992 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88992 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # '[' -z 88992 ']' 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:17.349 07:51:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:17.349 [2024-10-07 07:51:16.795439] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:30:17.349 [2024-10-07 07:51:16.795888] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88992 ] 00:30:17.607 [2024-10-07 07:51:16.979482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.866 [2024-10-07 07:51:17.197010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.866 [2024-10-07 07:51:17.415564] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.866 [2024-10-07 07:51:17.415593] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@867 -- # return 0 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.133 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.393 malloc1 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.393 [2024-10-07 07:51:17.704941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:18.393 [2024-10-07 07:51:17.705145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.393 [2024-10-07 07:51:17.705217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:18.393 [2024-10-07 07:51:17.705426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.393 [2024-10-07 07:51:17.707734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.393 [2024-10-07 07:51:17.707896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:18.393 pt1 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:18.393 07:51:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.393 malloc2 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.393 [2024-10-07 07:51:17.776842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:18.393 [2024-10-07 07:51:17.777033] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.393 [2024-10-07 07:51:17.777099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:18.393 [2024-10-07 07:51:17.777193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.393 [2024-10-07 07:51:17.779337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.393 [2024-10-07 07:51:17.779471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:18.393 pt2 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.393 [2024-10-07 07:51:17.788927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:18.393 [2024-10-07 07:51:17.791254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:18.393 [2024-10-07 07:51:17.791544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:18.393 [2024-10-07 07:51:17.791642] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:18.393 [2024-10-07 07:51:17.791831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:18.393 [2024-10-07 07:51:17.791989] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:18.393 [2024-10-07 07:51:17.792011] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:18.393 [2024-10-07 07:51:17.792088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:18.393 "name": "raid_bdev1", 00:30:18.393 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:18.393 "strip_size_kb": 0, 00:30:18.393 "state": "online", 00:30:18.393 "raid_level": "raid1", 00:30:18.393 "superblock": true, 00:30:18.393 "num_base_bdevs": 2, 00:30:18.393 "num_base_bdevs_discovered": 2, 00:30:18.393 "num_base_bdevs_operational": 2, 00:30:18.393 "base_bdevs_list": [ 00:30:18.393 { 00:30:18.393 "name": "pt1", 00:30:18.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:18.393 "is_configured": true, 00:30:18.393 "data_offset": 256, 00:30:18.393 "data_size": 7936 00:30:18.393 }, 00:30:18.393 { 00:30:18.393 "name": "pt2", 00:30:18.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:18.393 "is_configured": true, 00:30:18.393 "data_offset": 256, 00:30:18.393 "data_size": 7936 00:30:18.393 } 00:30:18.393 ] 00:30:18.393 }' 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:18.393 07:51:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:18.652 07:51:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.652 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.912 [2024-10-07 07:51:18.213256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:18.912 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.912 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:18.912 "name": "raid_bdev1", 00:30:18.912 "aliases": [ 00:30:18.912 "70ad3a62-571f-48c0-bb15-3c2d62ba6093" 00:30:18.912 ], 00:30:18.912 "product_name": "Raid Volume", 00:30:18.912 "block_size": 4128, 00:30:18.912 "num_blocks": 7936, 00:30:18.912 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:18.912 "md_size": 32, 00:30:18.912 "md_interleave": true, 00:30:18.912 "dif_type": 0, 00:30:18.912 "assigned_rate_limits": { 00:30:18.912 "rw_ios_per_sec": 0, 00:30:18.912 "rw_mbytes_per_sec": 0, 00:30:18.912 "r_mbytes_per_sec": 0, 00:30:18.912 "w_mbytes_per_sec": 0 00:30:18.912 }, 00:30:18.912 "claimed": false, 00:30:18.912 "zoned": false, 00:30:18.912 "supported_io_types": { 00:30:18.912 "read": true, 00:30:18.912 "write": true, 00:30:18.912 "unmap": false, 00:30:18.912 "flush": false, 00:30:18.912 "reset": true, 
00:30:18.912 "nvme_admin": false, 00:30:18.912 "nvme_io": false, 00:30:18.912 "nvme_io_md": false, 00:30:18.912 "write_zeroes": true, 00:30:18.912 "zcopy": false, 00:30:18.912 "get_zone_info": false, 00:30:18.912 "zone_management": false, 00:30:18.912 "zone_append": false, 00:30:18.912 "compare": false, 00:30:18.912 "compare_and_write": false, 00:30:18.912 "abort": false, 00:30:18.912 "seek_hole": false, 00:30:18.912 "seek_data": false, 00:30:18.912 "copy": false, 00:30:18.912 "nvme_iov_md": false 00:30:18.912 }, 00:30:18.912 "memory_domains": [ 00:30:18.912 { 00:30:18.912 "dma_device_id": "system", 00:30:18.912 "dma_device_type": 1 00:30:18.912 }, 00:30:18.912 { 00:30:18.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.912 "dma_device_type": 2 00:30:18.912 }, 00:30:18.912 { 00:30:18.912 "dma_device_id": "system", 00:30:18.912 "dma_device_type": 1 00:30:18.912 }, 00:30:18.912 { 00:30:18.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.912 "dma_device_type": 2 00:30:18.912 } 00:30:18.912 ], 00:30:18.912 "driver_specific": { 00:30:18.912 "raid": { 00:30:18.912 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:18.912 "strip_size_kb": 0, 00:30:18.912 "state": "online", 00:30:18.912 "raid_level": "raid1", 00:30:18.912 "superblock": true, 00:30:18.912 "num_base_bdevs": 2, 00:30:18.912 "num_base_bdevs_discovered": 2, 00:30:18.912 "num_base_bdevs_operational": 2, 00:30:18.912 "base_bdevs_list": [ 00:30:18.912 { 00:30:18.912 "name": "pt1", 00:30:18.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:18.912 "is_configured": true, 00:30:18.912 "data_offset": 256, 00:30:18.912 "data_size": 7936 00:30:18.912 }, 00:30:18.912 { 00:30:18.912 "name": "pt2", 00:30:18.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:18.912 "is_configured": true, 00:30:18.912 "data_offset": 256, 00:30:18.912 "data_size": 7936 00:30:18.912 } 00:30:18.912 ] 00:30:18.913 } 00:30:18.913 } 00:30:18.913 }' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:18.913 pt2' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.913 
07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:18.913 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.913 [2024-10-07 07:51:18.457333] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:19.172 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.172 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=70ad3a62-571f-48c0-bb15-3c2d62ba6093 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 70ad3a62-571f-48c0-bb15-3c2d62ba6093 ']' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 [2024-10-07 07:51:18.497012] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:19.173 [2024-10-07 07:51:18.497145] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:19.173 [2024-10-07 07:51:18.497370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:19.173 [2024-10-07 07:51:18.497545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:19.173 [2024-10-07 07:51:18.497674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:30:19.173 07:51:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # local es=0 00:30:19.173 
07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 [2024-10-07 07:51:18.637241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:19.173 request: 00:30:19.173 { 00:30:19.173 "name": "raid_bdev1", 00:30:19.173 [2024-10-07 07:51:18.641143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:19.173 [2024-10-07 07:51:18.641277] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:19.173 [2024-10-07 07:51:18.641372] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:19.173 [2024-10-07 07:51:18.641402] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:19.173 [2024-10-07 07:51:18.641423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:30:19.173 "raid_level": "raid1", 00:30:19.173 "base_bdevs": [ 00:30:19.173 "malloc1", 00:30:19.173 "malloc2" 00:30:19.173 ], 00:30:19.173 "superblock": false, 00:30:19.173 "method": "bdev_raid_create", 00:30:19.173 "req_id": 1 00:30:19.173 } 00:30:19.173 Got JSON-RPC error response 00:30:19.173 response: 00:30:19.173 { 00:30:19.173 "code": -17, 00:30:19.173 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:19.173 } 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@656 -- # es=1 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 [2024-10-07 07:51:18.701466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:19.173 [2024-10-07 07:51:18.701799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.173 [2024-10-07 07:51:18.701888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:19.173 [2024-10-07 07:51:18.702101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.173 [2024-10-07 07:51:18.705907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.173 [2024-10-07 07:51:18.706113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:19.173 [2024-10-07 07:51:18.706374] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:19.173 [2024-10-07 07:51:18.706585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:19.173 pt1 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:19.173 07:51:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.173 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.433 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:19.433 "name": "raid_bdev1", 00:30:19.433 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:19.433 "strip_size_kb": 0, 00:30:19.433 "state": "configuring", 00:30:19.433 "raid_level": "raid1", 00:30:19.433 "superblock": true, 00:30:19.433 "num_base_bdevs": 2, 00:30:19.433 "num_base_bdevs_discovered": 1, 00:30:19.433 "num_base_bdevs_operational": 2, 00:30:19.433 "base_bdevs_list": [ 00:30:19.433 { 00:30:19.433 "name": "pt1", 00:30:19.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:19.433 "is_configured": true, 00:30:19.433 
"data_offset": 256, 00:30:19.433 "data_size": 7936 00:30:19.433 }, 00:30:19.433 { 00:30:19.433 "name": null, 00:30:19.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:19.433 "is_configured": false, 00:30:19.433 "data_offset": 256, 00:30:19.433 "data_size": 7936 00:30:19.433 } 00:30:19.433 ] 00:30:19.433 }' 00:30:19.433 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:19.433 07:51:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.693 [2024-10-07 07:51:19.142626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:19.693 [2024-10-07 07:51:19.142955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.693 [2024-10-07 07:51:19.143025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:19.693 [2024-10-07 07:51:19.143122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.693 [2024-10-07 07:51:19.143410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.693 [2024-10-07 07:51:19.143541] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:30:19.693 [2024-10-07 07:51:19.143627] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:19.693 [2024-10-07 07:51:19.143674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:19.693 [2024-10-07 07:51:19.143814] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:19.693 [2024-10-07 07:51:19.143830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:19.693 [2024-10-07 07:51:19.143920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:19.693 [2024-10-07 07:51:19.143998] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:19.693 [2024-10-07 07:51:19.144009] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:30:19.693 [2024-10-07 07:51:19.144088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:19.693 pt2 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:19.693 07:51:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:19.693 "name": "raid_bdev1", 00:30:19.693 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:19.693 "strip_size_kb": 0, 00:30:19.693 "state": "online", 00:30:19.693 "raid_level": "raid1", 00:30:19.693 "superblock": true, 00:30:19.693 "num_base_bdevs": 2, 00:30:19.693 "num_base_bdevs_discovered": 2, 00:30:19.693 "num_base_bdevs_operational": 2, 00:30:19.693 "base_bdevs_list": [ 00:30:19.693 { 00:30:19.693 "name": "pt1", 00:30:19.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:19.693 "is_configured": true, 00:30:19.693 
"data_offset": 256, 00:30:19.693 "data_size": 7936 00:30:19.693 }, 00:30:19.693 { 00:30:19.693 "name": "pt2", 00:30:19.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:19.693 "is_configured": true, 00:30:19.693 "data_offset": 256, 00:30:19.693 "data_size": 7936 00:30:19.693 } 00:30:19.693 ] 00:30:19.693 }' 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:19.693 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.261 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.262 [2024-10-07 07:51:19.603057] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 
0 ]] 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:20.262 "name": "raid_bdev1", 00:30:20.262 "aliases": [ 00:30:20.262 "70ad3a62-571f-48c0-bb15-3c2d62ba6093" 00:30:20.262 ], 00:30:20.262 "product_name": "Raid Volume", 00:30:20.262 "block_size": 4128, 00:30:20.262 "num_blocks": 7936, 00:30:20.262 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:20.262 "md_size": 32, 00:30:20.262 "md_interleave": true, 00:30:20.262 "dif_type": 0, 00:30:20.262 "assigned_rate_limits": { 00:30:20.262 "rw_ios_per_sec": 0, 00:30:20.262 "rw_mbytes_per_sec": 0, 00:30:20.262 "r_mbytes_per_sec": 0, 00:30:20.262 "w_mbytes_per_sec": 0 00:30:20.262 }, 00:30:20.262 "claimed": false, 00:30:20.262 "zoned": false, 00:30:20.262 "supported_io_types": { 00:30:20.262 "read": true, 00:30:20.262 "write": true, 00:30:20.262 "unmap": false, 00:30:20.262 "flush": false, 00:30:20.262 "reset": true, 00:30:20.262 "nvme_admin": false, 00:30:20.262 "nvme_io": false, 00:30:20.262 "nvme_io_md": false, 00:30:20.262 "write_zeroes": true, 00:30:20.262 "zcopy": false, 00:30:20.262 "get_zone_info": false, 00:30:20.262 "zone_management": false, 00:30:20.262 "zone_append": false, 00:30:20.262 "compare": false, 00:30:20.262 "compare_and_write": false, 00:30:20.262 "abort": false, 00:30:20.262 "seek_hole": false, 00:30:20.262 "seek_data": false, 00:30:20.262 "copy": false, 00:30:20.262 "nvme_iov_md": false 00:30:20.262 }, 00:30:20.262 "memory_domains": [ 00:30:20.262 { 00:30:20.262 "dma_device_id": "system", 00:30:20.262 "dma_device_type": 1 00:30:20.262 }, 00:30:20.262 { 00:30:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.262 "dma_device_type": 2 00:30:20.262 }, 00:30:20.262 { 00:30:20.262 "dma_device_id": "system", 00:30:20.262 "dma_device_type": 1 00:30:20.262 }, 00:30:20.262 { 00:30:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.262 "dma_device_type": 2 00:30:20.262 } 00:30:20.262 ], 00:30:20.262 "driver_specific": { 
00:30:20.262 "raid": { 00:30:20.262 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:20.262 "strip_size_kb": 0, 00:30:20.262 "state": "online", 00:30:20.262 "raid_level": "raid1", 00:30:20.262 "superblock": true, 00:30:20.262 "num_base_bdevs": 2, 00:30:20.262 "num_base_bdevs_discovered": 2, 00:30:20.262 "num_base_bdevs_operational": 2, 00:30:20.262 "base_bdevs_list": [ 00:30:20.262 { 00:30:20.262 "name": "pt1", 00:30:20.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:20.262 "is_configured": true, 00:30:20.262 "data_offset": 256, 00:30:20.262 "data_size": 7936 00:30:20.262 }, 00:30:20.262 { 00:30:20.262 "name": "pt2", 00:30:20.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:20.262 "is_configured": true, 00:30:20.262 "data_offset": 256, 00:30:20.262 "data_size": 7936 00:30:20.262 } 00:30:20.262 ] 00:30:20.262 } 00:30:20.262 } 00:30:20.262 }' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:20.262 pt2' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.262 
07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.262 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:20.521 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:30:20.521 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:30:20.521 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:20.521 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.521 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:20.522 [2024-10-07 07:51:19.831019] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 70ad3a62-571f-48c0-bb15-3c2d62ba6093 '!=' 70ad3a62-571f-48c0-bb15-3c2d62ba6093 ']' 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.522 [2024-10-07 07:51:19.878857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:20.522 "name": "raid_bdev1", 00:30:20.522 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:20.522 "strip_size_kb": 0, 00:30:20.522 "state": "online", 00:30:20.522 "raid_level": "raid1", 00:30:20.522 "superblock": true, 00:30:20.522 "num_base_bdevs": 2, 00:30:20.522 "num_base_bdevs_discovered": 1, 00:30:20.522 "num_base_bdevs_operational": 1, 00:30:20.522 "base_bdevs_list": [ 00:30:20.522 { 00:30:20.522 "name": null, 00:30:20.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.522 "is_configured": false, 
00:30:20.522 "data_offset": 0, 00:30:20.522 "data_size": 7936 00:30:20.522 }, 00:30:20.522 { 00:30:20.522 "name": "pt2", 00:30:20.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:20.522 "is_configured": true, 00:30:20.522 "data_offset": 256, 00:30:20.522 "data_size": 7936 00:30:20.522 } 00:30:20.522 ] 00:30:20.522 }' 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:20.522 07:51:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.781 [2024-10-07 07:51:20.318952] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:20.781 [2024-10-07 07:51:20.319209] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:20.781 [2024-10-07 07:51:20.319356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:20.781 [2024-10-07 07:51:20.319430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:20.781 [2024-10-07 07:51:20.319447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:20.781 07:51:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:20.781 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:21.040 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 [2024-10-07 07:51:20.386967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:21.041 [2024-10-07 07:51:20.387258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:21.041 [2024-10-07 07:51:20.387325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:21.041 [2024-10-07 07:51:20.387423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:21.041 [2024-10-07 07:51:20.390384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:21.041 [2024-10-07 07:51:20.390581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:21.041 [2024-10-07 07:51:20.390846] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:21.041 pt2 00:30:21.041 [2024-10-07 07:51:20.391070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:21.041 [2024-10-07 07:51:20.391245] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:21.041 [2024-10-07 07:51:20.391268] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:21.041 [2024-10-07 07:51:20.391406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:21.041 [2024-10-07 07:51:20.391489] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:21.041 [2024-10-07 07:51:20.391501] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:21.041 [2024-10-07 07:51:20.391596] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:21.041 "name": "raid_bdev1", 00:30:21.041 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:21.041 "strip_size_kb": 0, 00:30:21.041 "state": "online", 00:30:21.041 "raid_level": "raid1", 00:30:21.041 "superblock": true, 00:30:21.041 "num_base_bdevs": 2, 00:30:21.041 "num_base_bdevs_discovered": 1, 00:30:21.041 "num_base_bdevs_operational": 1, 00:30:21.041 "base_bdevs_list": [ 00:30:21.041 { 00:30:21.041 "name": null, 00:30:21.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.041 "is_configured": false, 00:30:21.041 "data_offset": 256, 00:30:21.041 "data_size": 7936 00:30:21.041 }, 00:30:21.041 { 00:30:21.041 "name": "pt2", 00:30:21.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:21.041 "is_configured": true, 00:30:21.041 "data_offset": 256, 00:30:21.041 "data_size": 7936 00:30:21.041 } 00:30:21.041 ] 00:30:21.041 }' 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:21.041 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.300 [2024-10-07 07:51:20.839085] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:21.300 [2024-10-07 07:51:20.839145] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:21.300 [2024-10-07 07:51:20.839262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:21.300 
[2024-10-07 07:51:20.839335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:21.300 [2024-10-07 07:51:20.839349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.300 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.559 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:30:21.559 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:30:21.559 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:30:21.559 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:21.559 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:21.559 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.559 [2024-10-07 07:51:20.899141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:21.559 [2024-10-07 07:51:20.899499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:30:21.559 [2024-10-07 07:51:20.899634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:30:21.559 [2024-10-07 07:51:20.899737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:21.559 [2024-10-07 07:51:20.902738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:21.559 [2024-10-07 07:51:20.902781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:21.559 [2024-10-07 07:51:20.902874] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:21.559 [2024-10-07 07:51:20.902942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:21.559 [2024-10-07 07:51:20.903060] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:30:21.559 [2024-10-07 07:51:20.903074] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:21.559 [2024-10-07 07:51:20.903103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:30:21.559 [2024-10-07 07:51:20.903176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:21.559 [2024-10-07 07:51:20.903333] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:30:21.559 [2024-10-07 07:51:20.903345] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:21.559 [2024-10-07 07:51:20.903434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:21.559 pt1 00:30:21.559 [2024-10-07 07:51:20.903505] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:30:21.560 [2024-10-07 07:51:20.903519] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 
00:30:21.560 [2024-10-07 07:51:20.903603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 
00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:21.560 "name": "raid_bdev1", 00:30:21.560 "uuid": "70ad3a62-571f-48c0-bb15-3c2d62ba6093", 00:30:21.560 "strip_size_kb": 0, 00:30:21.560 "state": "online", 00:30:21.560 "raid_level": "raid1", 00:30:21.560 "superblock": true, 00:30:21.560 "num_base_bdevs": 2, 00:30:21.560 "num_base_bdevs_discovered": 1, 00:30:21.560 "num_base_bdevs_operational": 1, 00:30:21.560 "base_bdevs_list": [ 00:30:21.560 { 00:30:21.560 "name": null, 00:30:21.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.560 "is_configured": false, 00:30:21.560 "data_offset": 256, 00:30:21.560 "data_size": 7936 00:30:21.560 }, 00:30:21.560 { 00:30:21.560 "name": "pt2", 00:30:21.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:21.560 "is_configured": true, 00:30:21.560 "data_offset": 256, 00:30:21.560 "data_size": 7936 00:30:21.560 } 00:30:21.560 ] 00:30:21.560 }' 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:21.560 07:51:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.819 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:21.819 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:30:21.819 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:21.819 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.819 07:51:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:22.078 [2024-10-07 07:51:21.415535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 70ad3a62-571f-48c0-bb15-3c2d62ba6093 '!=' 70ad3a62-571f-48c0-bb15-3c2d62ba6093 ']' 00:30:22.078 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88992 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' -z 88992 ']' 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # kill -0 88992 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # uname 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 88992 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # process_name=reactor_0 
00:30:22.079 killing process with pid 88992 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # echo 'killing process with pid 88992' 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # kill 88992 00:30:22.079 [2024-10-07 07:51:21.503319] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:22.079 [2024-10-07 07:51:21.503443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:22.079 07:51:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@977 -- # wait 88992 00:30:22.079 [2024-10-07 07:51:21.503505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:22.079 [2024-10-07 07:51:21.503526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:30:22.337 [2024-10-07 07:51:21.720372] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:23.790 07:51:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:30:23.790 00:30:23.790 real 0m6.347s 00:30:23.790 user 0m9.523s 00:30:23.790 sys 0m1.173s 00:30:23.790 ************************************ 00:30:23.790 END TEST raid_superblock_test_md_interleaved 00:30:23.790 ************************************ 00:30:23.790 07:51:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:23.790 07:51:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:23.790 07:51:23 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:30:23.790 07:51:23 bdev_raid -- common/autotest_common.sh@1104 -- # 
'[' 7 -le 1 ']' 00:30:23.790 07:51:23 bdev_raid -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:23.790 07:51:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:23.790 ************************************ 00:30:23.790 START TEST raid_rebuild_test_sb_md_interleaved 00:30:23.790 ************************************ 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # raid_rebuild_test raid1 2 true false false 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89315 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89315 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # '[' -z 89315 ']' 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:23.790 07:51:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:23.790 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:23.790 Zero copy mechanism will not be used. 00:30:23.790 [2024-10-07 07:51:23.262325] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:23.790 [2024-10-07 07:51:23.262506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89315 ] 00:30:24.063 [2024-10-07 07:51:23.443278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.323 [2024-10-07 07:51:23.657853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.323 [2024-10-07 07:51:23.874239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:24.323 [2024-10-07 07:51:23.874528] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@867 -- # return 0 00:30:24.920 07:51:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.920 BaseBdev1_malloc 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.920 [2024-10-07 07:51:24.253606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:24.920 [2024-10-07 07:51:24.253859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.920 [2024-10-07 07:51:24.253998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:24.920 [2024-10-07 07:51:24.254045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.920 [2024-10-07 07:51:24.256518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.920 [2024-10-07 07:51:24.256572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:24.920 BaseBdev1 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.920 BaseBdev2_malloc 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.920 [2024-10-07 07:51:24.317240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:24.920 [2024-10-07 07:51:24.317327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.920 [2024-10-07 07:51:24.317356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:24.920 [2024-10-07 07:51:24.317374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.920 [2024-10-07 07:51:24.319680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.920 [2024-10-07 07:51:24.319739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:24.920 BaseBdev2 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.920 spare_malloc 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.920 spare_delay 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.920 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.921 [2024-10-07 07:51:24.381766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:24.921 [2024-10-07 07:51:24.381841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.921 [2024-10-07 07:51:24.381868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:24.921 [2024-10-07 07:51:24.381885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.921 [2024-10-07 07:51:24.384032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.921 
[2024-10-07 07:51:24.384080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:24.921 spare 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.921 [2024-10-07 07:51:24.389834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:24.921 [2024-10-07 07:51:24.392082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:24.921 [2024-10-07 07:51:24.392428] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:24.921 [2024-10-07 07:51:24.392603] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:24.921 [2024-10-07 07:51:24.392698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:24.921 [2024-10-07 07:51:24.392814] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:24.921 [2024-10-07 07:51:24.392826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:24.921 [2024-10-07 07:51:24.392907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:24.921 07:51:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.921 "name": "raid_bdev1", 00:30:24.921 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:24.921 "strip_size_kb": 0, 00:30:24.921 "state": "online", 00:30:24.921 
"raid_level": "raid1", 00:30:24.921 "superblock": true, 00:30:24.921 "num_base_bdevs": 2, 00:30:24.921 "num_base_bdevs_discovered": 2, 00:30:24.921 "num_base_bdevs_operational": 2, 00:30:24.921 "base_bdevs_list": [ 00:30:24.921 { 00:30:24.921 "name": "BaseBdev1", 00:30:24.921 "uuid": "3b33b78f-c305-5f4e-b61c-5c9faca56f38", 00:30:24.921 "is_configured": true, 00:30:24.921 "data_offset": 256, 00:30:24.921 "data_size": 7936 00:30:24.921 }, 00:30:24.921 { 00:30:24.921 "name": "BaseBdev2", 00:30:24.921 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:24.921 "is_configured": true, 00:30:24.921 "data_offset": 256, 00:30:24.921 "data_size": 7936 00:30:24.921 } 00:30:24.921 ] 00:30:24.921 }' 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:24.921 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:25.489 [2024-10-07 07:51:24.822214] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:25.489 [2024-10-07 07:51:24.917971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:25.489 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.490 "name": "raid_bdev1", 00:30:25.490 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:25.490 "strip_size_kb": 0, 00:30:25.490 "state": "online", 00:30:25.490 "raid_level": "raid1", 00:30:25.490 "superblock": true, 00:30:25.490 "num_base_bdevs": 2, 00:30:25.490 "num_base_bdevs_discovered": 1, 00:30:25.490 "num_base_bdevs_operational": 1, 00:30:25.490 "base_bdevs_list": [ 00:30:25.490 { 00:30:25.490 "name": null, 00:30:25.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.490 "is_configured": false, 00:30:25.490 "data_offset": 0, 00:30:25.490 "data_size": 7936 00:30:25.490 }, 00:30:25.490 { 00:30:25.490 
"name": "BaseBdev2", 00:30:25.490 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:25.490 "is_configured": true, 00:30:25.490 "data_offset": 256, 00:30:25.490 "data_size": 7936 00:30:25.490 } 00:30:25.490 ] 00:30:25.490 }' 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:25.490 07:51:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:26.058 07:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:26.058 07:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:26.058 07:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:26.058 [2024-10-07 07:51:25.350143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:26.058 [2024-10-07 07:51:25.367144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:26.058 07:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:26.058 07:51:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:26.058 [2024-10-07 07:51:25.369579] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:26.995 07:51:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:26.995 "name": "raid_bdev1", 00:30:26.995 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:26.995 "strip_size_kb": 0, 00:30:26.995 "state": "online", 00:30:26.995 "raid_level": "raid1", 00:30:26.995 "superblock": true, 00:30:26.995 "num_base_bdevs": 2, 00:30:26.995 "num_base_bdevs_discovered": 2, 00:30:26.995 "num_base_bdevs_operational": 2, 00:30:26.995 "process": { 00:30:26.995 "type": "rebuild", 00:30:26.995 "target": "spare", 00:30:26.995 "progress": { 00:30:26.995 "blocks": 2560, 00:30:26.995 "percent": 32 00:30:26.995 } 00:30:26.995 }, 00:30:26.995 "base_bdevs_list": [ 00:30:26.995 { 00:30:26.995 "name": "spare", 00:30:26.995 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:26.995 "is_configured": true, 00:30:26.995 "data_offset": 256, 00:30:26.995 "data_size": 7936 00:30:26.995 }, 00:30:26.995 { 00:30:26.995 "name": "BaseBdev2", 00:30:26.995 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:26.995 "is_configured": true, 00:30:26.995 "data_offset": 256, 00:30:26.995 "data_size": 7936 00:30:26.995 } 00:30:26.995 ] 00:30:26.995 }' 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:26.995 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:26.995 [2024-10-07 07:51:26.530955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:27.254 [2024-10-07 07:51:26.578050] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:27.254 [2024-10-07 07:51:26.578200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:27.254 [2024-10-07 07:51:26.578242] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:27.254 [2024-10-07 07:51:26.578270] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:27.254 07:51:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:27.254 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.254 "name": "raid_bdev1", 00:30:27.254 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:27.254 "strip_size_kb": 0, 00:30:27.254 "state": "online", 00:30:27.254 "raid_level": "raid1", 00:30:27.254 "superblock": true, 00:30:27.254 "num_base_bdevs": 2, 00:30:27.254 "num_base_bdevs_discovered": 1, 00:30:27.254 "num_base_bdevs_operational": 1, 00:30:27.254 "base_bdevs_list": [ 00:30:27.254 { 00:30:27.254 "name": null, 
00:30:27.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.255 "is_configured": false, 00:30:27.255 "data_offset": 0, 00:30:27.255 "data_size": 7936 00:30:27.255 }, 00:30:27.255 { 00:30:27.255 "name": "BaseBdev2", 00:30:27.255 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:27.255 "is_configured": true, 00:30:27.255 "data_offset": 256, 00:30:27.255 "data_size": 7936 00:30:27.255 } 00:30:27.255 ] 00:30:27.255 }' 00:30:27.255 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.255 07:51:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:27.513 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:27.513 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:27.513 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:27.513 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:27.513 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:27.513 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.514 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:27.514 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:27.514 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:27.773 "name": "raid_bdev1", 00:30:27.773 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:27.773 "strip_size_kb": 0, 00:30:27.773 "state": "online", 00:30:27.773 "raid_level": "raid1", 00:30:27.773 "superblock": true, 00:30:27.773 "num_base_bdevs": 2, 00:30:27.773 "num_base_bdevs_discovered": 1, 00:30:27.773 "num_base_bdevs_operational": 1, 00:30:27.773 "base_bdevs_list": [ 00:30:27.773 { 00:30:27.773 "name": null, 00:30:27.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.773 "is_configured": false, 00:30:27.773 "data_offset": 0, 00:30:27.773 "data_size": 7936 00:30:27.773 }, 00:30:27.773 { 00:30:27.773 "name": "BaseBdev2", 00:30:27.773 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:27.773 "is_configured": true, 00:30:27.773 "data_offset": 256, 00:30:27.773 "data_size": 7936 00:30:27.773 } 00:30:27.773 ] 00:30:27.773 }' 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:27.773 [2024-10-07 07:51:27.203724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:27.773 [2024-10-07 07:51:27.220914] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:27.773 07:51:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:27.773 [2024-10-07 07:51:27.223047] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:28.713 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:28.972 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:28.972 "name": "raid_bdev1", 00:30:28.972 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:28.972 "strip_size_kb": 0, 00:30:28.972 "state": "online", 00:30:28.972 "raid_level": "raid1", 00:30:28.972 
"superblock": true, 00:30:28.973 "num_base_bdevs": 2, 00:30:28.973 "num_base_bdevs_discovered": 2, 00:30:28.973 "num_base_bdevs_operational": 2, 00:30:28.973 "process": { 00:30:28.973 "type": "rebuild", 00:30:28.973 "target": "spare", 00:30:28.973 "progress": { 00:30:28.973 "blocks": 2560, 00:30:28.973 "percent": 32 00:30:28.973 } 00:30:28.973 }, 00:30:28.973 "base_bdevs_list": [ 00:30:28.973 { 00:30:28.973 "name": "spare", 00:30:28.973 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:28.973 "is_configured": true, 00:30:28.973 "data_offset": 256, 00:30:28.973 "data_size": 7936 00:30:28.973 }, 00:30:28.973 { 00:30:28.973 "name": "BaseBdev2", 00:30:28.973 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:28.973 "is_configured": true, 00:30:28.973 "data_offset": 256, 00:30:28.973 "data_size": 7936 00:30:28.973 } 00:30:28.973 ] 00:30:28.973 }' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:28.973 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:28.973 07:51:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=778 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:28.973 "name": "raid_bdev1", 00:30:28.973 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:28.973 "strip_size_kb": 0, 00:30:28.973 "state": "online", 00:30:28.973 "raid_level": "raid1", 00:30:28.973 "superblock": true, 00:30:28.973 "num_base_bdevs": 2, 00:30:28.973 
"num_base_bdevs_discovered": 2, 00:30:28.973 "num_base_bdevs_operational": 2, 00:30:28.973 "process": { 00:30:28.973 "type": "rebuild", 00:30:28.973 "target": "spare", 00:30:28.973 "progress": { 00:30:28.973 "blocks": 2816, 00:30:28.973 "percent": 35 00:30:28.973 } 00:30:28.973 }, 00:30:28.973 "base_bdevs_list": [ 00:30:28.973 { 00:30:28.973 "name": "spare", 00:30:28.973 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:28.973 "is_configured": true, 00:30:28.973 "data_offset": 256, 00:30:28.973 "data_size": 7936 00:30:28.973 }, 00:30:28.973 { 00:30:28.973 "name": "BaseBdev2", 00:30:28.973 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:28.973 "is_configured": true, 00:30:28.973 "data_offset": 256, 00:30:28.973 "data_size": 7936 00:30:28.973 } 00:30:28.973 ] 00:30:28.973 }' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:28.973 07:51:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:30.350 07:51:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:30.350 "name": "raid_bdev1", 00:30:30.350 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:30.350 "strip_size_kb": 0, 00:30:30.350 "state": "online", 00:30:30.350 "raid_level": "raid1", 00:30:30.350 "superblock": true, 00:30:30.350 "num_base_bdevs": 2, 00:30:30.350 "num_base_bdevs_discovered": 2, 00:30:30.350 "num_base_bdevs_operational": 2, 00:30:30.350 "process": { 00:30:30.350 "type": "rebuild", 00:30:30.350 "target": "spare", 00:30:30.350 "progress": { 00:30:30.350 "blocks": 5632, 00:30:30.350 "percent": 70 00:30:30.350 } 00:30:30.350 }, 00:30:30.350 "base_bdevs_list": [ 00:30:30.350 { 00:30:30.350 "name": "spare", 00:30:30.350 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:30.350 "is_configured": true, 00:30:30.350 "data_offset": 256, 00:30:30.350 "data_size": 7936 00:30:30.350 }, 00:30:30.350 { 00:30:30.350 "name": "BaseBdev2", 00:30:30.350 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:30.350 "is_configured": true, 00:30:30.350 "data_offset": 256, 00:30:30.350 "data_size": 7936 00:30:30.350 } 
00:30:30.350 ] 00:30:30.350 }' 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:30.350 07:51:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:30.946 [2024-10-07 07:51:30.344304] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:30.946 [2024-10-07 07:51:30.344384] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:30.946 [2024-10-07 07:51:30.344539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:31.205 "name": "raid_bdev1", 00:30:31.205 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:31.205 "strip_size_kb": 0, 00:30:31.205 "state": "online", 00:30:31.205 "raid_level": "raid1", 00:30:31.205 "superblock": true, 00:30:31.205 "num_base_bdevs": 2, 00:30:31.205 "num_base_bdevs_discovered": 2, 00:30:31.205 "num_base_bdevs_operational": 2, 00:30:31.205 "base_bdevs_list": [ 00:30:31.205 { 00:30:31.205 "name": "spare", 00:30:31.205 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:31.205 "is_configured": true, 00:30:31.205 "data_offset": 256, 00:30:31.205 "data_size": 7936 00:30:31.205 }, 00:30:31.205 { 00:30:31.205 "name": "BaseBdev2", 00:30:31.205 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:31.205 "is_configured": true, 00:30:31.205 "data_offset": 256, 00:30:31.205 "data_size": 7936 00:30:31.205 } 00:30:31.205 ] 00:30:31.205 }' 00:30:31.205 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@709 -- # break 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:31.464 "name": "raid_bdev1", 00:30:31.464 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:31.464 "strip_size_kb": 0, 00:30:31.464 "state": "online", 00:30:31.464 "raid_level": "raid1", 00:30:31.464 "superblock": true, 00:30:31.464 "num_base_bdevs": 2, 00:30:31.464 "num_base_bdevs_discovered": 2, 00:30:31.464 "num_base_bdevs_operational": 2, 00:30:31.464 "base_bdevs_list": [ 00:30:31.464 { 00:30:31.464 "name": "spare", 00:30:31.464 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:31.464 "is_configured": true, 00:30:31.464 "data_offset": 256, 00:30:31.464 "data_size": 7936 
00:30:31.464 }, 00:30:31.464 { 00:30:31.464 "name": "BaseBdev2", 00:30:31.464 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:31.464 "is_configured": true, 00:30:31.464 "data_offset": 256, 00:30:31.464 "data_size": 7936 00:30:31.464 } 00:30:31.464 ] 00:30:31.464 }' 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:31.464 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.465 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.465 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:31.465 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:31.465 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:31.465 07:51:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:31.465 "name": "raid_bdev1", 00:30:31.465 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:31.465 "strip_size_kb": 0, 00:30:31.465 "state": "online", 00:30:31.465 "raid_level": "raid1", 00:30:31.465 "superblock": true, 00:30:31.465 "num_base_bdevs": 2, 00:30:31.465 "num_base_bdevs_discovered": 2, 00:30:31.465 "num_base_bdevs_operational": 2, 00:30:31.465 "base_bdevs_list": [ 00:30:31.465 { 00:30:31.465 "name": "spare", 00:30:31.465 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:31.465 "is_configured": true, 00:30:31.465 "data_offset": 256, 00:30:31.465 "data_size": 7936 00:30:31.465 }, 00:30:31.465 { 00:30:31.465 "name": "BaseBdev2", 00:30:31.465 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:31.465 "is_configured": true, 00:30:31.465 "data_offset": 256, 00:30:31.465 "data_size": 7936 00:30:31.465 } 00:30:31.465 ] 00:30:31.465 }' 00:30:31.465 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:31.465 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.031 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:30:32.031 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.031 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.031 [2024-10-07 07:51:31.415946] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:32.031 [2024-10-07 07:51:31.415984] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:32.031 [2024-10-07 07:51:31.416077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:32.032 [2024-10-07 07:51:31.416152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:32.032 [2024-10-07 07:51:31.416165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:32.032 07:51:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.032 [2024-10-07 07:51:31.483935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:32.032 [2024-10-07 07:51:31.484010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:32.032 [2024-10-07 07:51:31.484042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:30:32.032 [2024-10-07 07:51:31.484058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:32.032 [2024-10-07 07:51:31.486512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:32.032 spare 00:30:32.032 [2024-10-07 07:51:31.486692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:32.032 [2024-10-07 07:51:31.486807] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:32.032 [2024-10-07 07:51:31.486882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:32.032 [2024-10-07 07:51:31.487018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:32.032 07:51:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.032 [2024-10-07 07:51:31.587125] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:32.032 [2024-10-07 07:51:31.587180] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:32.032 [2024-10-07 07:51:31.587323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:32.032 [2024-10-07 07:51:31.587447] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:32.032 [2024-10-07 07:51:31.587458] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:32.032 [2024-10-07 07:51:31.587574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:32.032 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:32.290 "name": "raid_bdev1", 00:30:32.290 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:32.290 "strip_size_kb": 0, 00:30:32.290 "state": "online", 00:30:32.290 "raid_level": "raid1", 00:30:32.290 "superblock": true, 00:30:32.290 "num_base_bdevs": 2, 00:30:32.290 "num_base_bdevs_discovered": 2, 00:30:32.290 "num_base_bdevs_operational": 2, 00:30:32.290 "base_bdevs_list": [ 00:30:32.290 { 00:30:32.290 "name": "spare", 00:30:32.290 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:32.290 "is_configured": true, 00:30:32.290 "data_offset": 256, 00:30:32.290 "data_size": 7936 00:30:32.290 }, 00:30:32.290 { 00:30:32.290 "name": 
"BaseBdev2", 00:30:32.290 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:32.290 "is_configured": true, 00:30:32.290 "data_offset": 256, 00:30:32.290 "data_size": 7936 00:30:32.290 } 00:30:32.290 ] 00:30:32.290 }' 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:32.290 07:51:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:32.550 "name": "raid_bdev1", 00:30:32.550 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:32.550 "strip_size_kb": 0, 00:30:32.550 "state": "online", 00:30:32.550 
"raid_level": "raid1", 00:30:32.550 "superblock": true, 00:30:32.550 "num_base_bdevs": 2, 00:30:32.550 "num_base_bdevs_discovered": 2, 00:30:32.550 "num_base_bdevs_operational": 2, 00:30:32.550 "base_bdevs_list": [ 00:30:32.550 { 00:30:32.550 "name": "spare", 00:30:32.550 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:32.550 "is_configured": true, 00:30:32.550 "data_offset": 256, 00:30:32.550 "data_size": 7936 00:30:32.550 }, 00:30:32.550 { 00:30:32.550 "name": "BaseBdev2", 00:30:32.550 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:32.550 "is_configured": true, 00:30:32.550 "data_offset": 256, 00:30:32.550 "data_size": 7936 00:30:32.550 } 00:30:32.550 ] 00:30:32.550 }' 00:30:32.550 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.810 07:51:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.810 [2024-10-07 07:51:32.220212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.810 07:51:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:32.810 "name": "raid_bdev1", 00:30:32.810 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:32.810 "strip_size_kb": 0, 00:30:32.810 "state": "online", 00:30:32.810 "raid_level": "raid1", 00:30:32.810 "superblock": true, 00:30:32.810 "num_base_bdevs": 2, 00:30:32.810 "num_base_bdevs_discovered": 1, 00:30:32.810 "num_base_bdevs_operational": 1, 00:30:32.810 "base_bdevs_list": [ 00:30:32.810 { 00:30:32.810 "name": null, 00:30:32.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.810 "is_configured": false, 00:30:32.810 "data_offset": 0, 00:30:32.810 "data_size": 7936 00:30:32.810 }, 00:30:32.810 { 00:30:32.810 "name": "BaseBdev2", 00:30:32.810 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:32.810 "is_configured": true, 00:30:32.810 "data_offset": 256, 00:30:32.810 "data_size": 7936 00:30:32.810 } 00:30:32.810 ] 00:30:32.810 }' 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:32.810 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:33.379 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:33.379 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:33.379 07:51:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:33.379 [2024-10-07 07:51:32.684356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:33.379 [2024-10-07 07:51:32.684746] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:33.379 [2024-10-07 07:51:32.684778] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:33.379 [2024-10-07 07:51:32.684826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:33.379 [2024-10-07 07:51:32.700571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:33.379 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:33.379 07:51:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:33.379 [2024-10-07 07:51:32.702862] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:34.317 "name": "raid_bdev1", 00:30:34.317 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:34.317 "strip_size_kb": 0, 00:30:34.317 "state": "online", 00:30:34.317 "raid_level": "raid1", 00:30:34.317 "superblock": true, 00:30:34.317 "num_base_bdevs": 2, 00:30:34.317 "num_base_bdevs_discovered": 2, 00:30:34.317 "num_base_bdevs_operational": 2, 00:30:34.317 "process": { 00:30:34.317 "type": "rebuild", 00:30:34.317 "target": "spare", 00:30:34.317 "progress": { 00:30:34.317 "blocks": 2560, 00:30:34.317 "percent": 32 00:30:34.317 } 00:30:34.317 }, 00:30:34.317 "base_bdevs_list": [ 00:30:34.317 { 00:30:34.317 "name": "spare", 00:30:34.317 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:34.317 "is_configured": true, 00:30:34.317 "data_offset": 256, 00:30:34.317 "data_size": 7936 00:30:34.317 }, 00:30:34.317 { 00:30:34.317 "name": "BaseBdev2", 00:30:34.317 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:34.317 "is_configured": true, 00:30:34.317 "data_offset": 256, 00:30:34.317 "data_size": 7936 00:30:34.317 } 00:30:34.317 ] 00:30:34.317 }' 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:34.317 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:34.317 [2024-10-07 07:51:33.856629] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:34.576 [2024-10-07 07:51:33.911139] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:34.576 [2024-10-07 07:51:33.911393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:34.576 [2024-10-07 07:51:33.911418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:34.576 [2024-10-07 07:51:33.911436] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.576 "name": "raid_bdev1", 00:30:34.576 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:34.576 "strip_size_kb": 0, 00:30:34.576 "state": "online", 00:30:34.576 "raid_level": "raid1", 00:30:34.576 "superblock": true, 00:30:34.576 "num_base_bdevs": 2, 00:30:34.576 "num_base_bdevs_discovered": 1, 00:30:34.576 "num_base_bdevs_operational": 1, 00:30:34.576 "base_bdevs_list": [ 00:30:34.576 { 00:30:34.576 "name": null, 00:30:34.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.576 "is_configured": false, 00:30:34.576 "data_offset": 0, 00:30:34.576 "data_size": 7936 00:30:34.576 }, 00:30:34.576 { 00:30:34.576 "name": "BaseBdev2", 00:30:34.576 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:34.576 "is_configured": true, 
00:30:34.576 "data_offset": 256, 00:30:34.576 "data_size": 7936 00:30:34.576 } 00:30:34.576 ] 00:30:34.576 }' 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.576 07:51:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:35.145 07:51:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:35.145 07:51:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:35.145 07:51:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:35.145 [2024-10-07 07:51:34.406309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:35.145 [2024-10-07 07:51:34.406396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.145 [2024-10-07 07:51:34.406428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:35.145 [2024-10-07 07:51:34.406446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.145 [2024-10-07 07:51:34.406684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.145 [2024-10-07 07:51:34.406721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:35.145 [2024-10-07 07:51:34.406807] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:35.145 [2024-10-07 07:51:34.406825] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:35.145 [2024-10-07 07:51:34.406838] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:35.145 [2024-10-07 07:51:34.406869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:35.145 [2024-10-07 07:51:34.422595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:35.145 spare 00:30:35.145 07:51:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:35.145 07:51:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:35.145 [2024-10-07 07:51:34.425058] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:30:36.149 "name": "raid_bdev1", 00:30:36.149 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:36.149 "strip_size_kb": 0, 00:30:36.149 "state": "online", 00:30:36.149 "raid_level": "raid1", 00:30:36.149 "superblock": true, 00:30:36.149 "num_base_bdevs": 2, 00:30:36.149 "num_base_bdevs_discovered": 2, 00:30:36.149 "num_base_bdevs_operational": 2, 00:30:36.149 "process": { 00:30:36.149 "type": "rebuild", 00:30:36.149 "target": "spare", 00:30:36.149 "progress": { 00:30:36.149 "blocks": 2560, 00:30:36.149 "percent": 32 00:30:36.149 } 00:30:36.149 }, 00:30:36.149 "base_bdevs_list": [ 00:30:36.149 { 00:30:36.149 "name": "spare", 00:30:36.149 "uuid": "07897046-d4db-5a0e-a7cd-312ae721042f", 00:30:36.149 "is_configured": true, 00:30:36.149 "data_offset": 256, 00:30:36.149 "data_size": 7936 00:30:36.149 }, 00:30:36.149 { 00:30:36.149 "name": "BaseBdev2", 00:30:36.149 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:36.149 "is_configured": true, 00:30:36.149 "data_offset": 256, 00:30:36.149 "data_size": 7936 00:30:36.149 } 00:30:36.149 ] 00:30:36.149 }' 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.149 [2024-10-07 
07:51:35.598346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:36.149 [2024-10-07 07:51:35.633489] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:36.149 [2024-10-07 07:51:35.633570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.149 [2024-10-07 07:51:35.633594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:36.149 [2024-10-07 07:51:35.633605] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.149 07:51:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.149 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.408 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.408 "name": "raid_bdev1", 00:30:36.408 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:36.408 "strip_size_kb": 0, 00:30:36.408 "state": "online", 00:30:36.408 "raid_level": "raid1", 00:30:36.408 "superblock": true, 00:30:36.408 "num_base_bdevs": 2, 00:30:36.408 "num_base_bdevs_discovered": 1, 00:30:36.408 "num_base_bdevs_operational": 1, 00:30:36.408 "base_bdevs_list": [ 00:30:36.408 { 00:30:36.408 "name": null, 00:30:36.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.408 "is_configured": false, 00:30:36.409 "data_offset": 0, 00:30:36.409 "data_size": 7936 00:30:36.409 }, 00:30:36.409 { 00:30:36.409 "name": "BaseBdev2", 00:30:36.409 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:36.409 "is_configured": true, 00:30:36.409 "data_offset": 256, 00:30:36.409 "data_size": 7936 00:30:36.409 } 00:30:36.409 ] 00:30:36.409 }' 00:30:36.409 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.409 07:51:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:36.667 07:51:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:36.667 "name": "raid_bdev1", 00:30:36.667 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:36.667 "strip_size_kb": 0, 00:30:36.667 "state": "online", 00:30:36.667 "raid_level": "raid1", 00:30:36.667 "superblock": true, 00:30:36.667 "num_base_bdevs": 2, 00:30:36.667 "num_base_bdevs_discovered": 1, 00:30:36.667 "num_base_bdevs_operational": 1, 00:30:36.667 "base_bdevs_list": [ 00:30:36.667 { 00:30:36.667 "name": null, 00:30:36.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.667 "is_configured": false, 00:30:36.667 "data_offset": 0, 00:30:36.667 "data_size": 7936 00:30:36.667 }, 00:30:36.667 { 00:30:36.667 "name": "BaseBdev2", 00:30:36.667 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:36.667 "is_configured": true, 00:30:36.667 "data_offset": 256, 
00:30:36.667 "data_size": 7936 00:30:36.667 } 00:30:36.667 ] 00:30:36.667 }' 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:36.667 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:36.925 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:36.925 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:36.925 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.925 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.925 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.926 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:36.926 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:36.926 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.926 [2024-10-07 07:51:36.268289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:36.926 [2024-10-07 07:51:36.268547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.926 [2024-10-07 07:51:36.268598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:36.926 [2024-10-07 07:51:36.268618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:36.926 [2024-10-07 07:51:36.268865] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.926 [2024-10-07 07:51:36.268887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:36.926 [2024-10-07 07:51:36.268962] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:36.926 [2024-10-07 07:51:36.268980] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:36.926 [2024-10-07 07:51:36.268996] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:36.926 [2024-10-07 07:51:36.269011] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:36.926 BaseBdev1 00:30:36.926 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:36.926 07:51:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.861 07:51:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:37.861 "name": "raid_bdev1", 00:30:37.861 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:37.861 "strip_size_kb": 0, 00:30:37.861 "state": "online", 00:30:37.861 "raid_level": "raid1", 00:30:37.861 "superblock": true, 00:30:37.861 "num_base_bdevs": 2, 00:30:37.861 "num_base_bdevs_discovered": 1, 00:30:37.861 "num_base_bdevs_operational": 1, 00:30:37.861 "base_bdevs_list": [ 00:30:37.861 { 00:30:37.861 "name": null, 00:30:37.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.861 "is_configured": false, 00:30:37.861 "data_offset": 0, 00:30:37.861 "data_size": 7936 00:30:37.861 }, 00:30:37.861 { 00:30:37.861 "name": "BaseBdev2", 00:30:37.861 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:37.861 "is_configured": true, 00:30:37.861 "data_offset": 256, 00:30:37.861 "data_size": 7936 00:30:37.861 } 00:30:37.861 ] 00:30:37.861 }' 00:30:37.861 07:51:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:37.861 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:38.427 "name": "raid_bdev1", 00:30:38.427 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:38.427 "strip_size_kb": 0, 00:30:38.427 "state": "online", 00:30:38.427 "raid_level": "raid1", 00:30:38.427 "superblock": true, 00:30:38.427 "num_base_bdevs": 2, 00:30:38.427 "num_base_bdevs_discovered": 1, 00:30:38.427 "num_base_bdevs_operational": 1, 00:30:38.427 "base_bdevs_list": [ 00:30:38.427 { 00:30:38.427 "name": 
null, 00:30:38.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.427 "is_configured": false, 00:30:38.427 "data_offset": 0, 00:30:38.427 "data_size": 7936 00:30:38.427 }, 00:30:38.427 { 00:30:38.427 "name": "BaseBdev2", 00:30:38.427 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:38.427 "is_configured": true, 00:30:38.427 "data_offset": 256, 00:30:38.427 "data_size": 7936 00:30:38.427 } 00:30:38.427 ] 00:30:38.427 }' 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # local es=0 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # local arg=rpc_cmd 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@645 -- # type -t rpc_cmd 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@645 -- # case "$(type -t "$arg")" in 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@656 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:38.427 [2024-10-07 07:51:37.892784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:38.427 request: 00:30:38.427 { 00:30:38.427 "base_bdev": "BaseBdev1", 00:30:38.427 "raid_bdev": "raid_bdev1", 00:30:38.427 [2024-10-07 07:51:37.894002] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:38.427 [2024-10-07 07:51:37.894043] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:38.427 "method": "bdev_raid_add_base_bdev", 00:30:38.427 "req_id": 1 00:30:38.427 } 00:30:38.427 Got JSON-RPC error response 00:30:38.427 response: 00:30:38.427 { 00:30:38.427 "code": -22, 00:30:38.427 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:38.427 } 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 1 == 0 ]] 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@656 -- # es=1 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@664 -- # (( es > 128 )) 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # [[ -n '' ]] 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@680 -- # (( !es == 0 )) 00:30:38.427 07:51:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:39.362 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:39.620 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:39.621 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.621 "name": "raid_bdev1", 00:30:39.621 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:39.621 "strip_size_kb": 0, 
00:30:39.621 "state": "online", 00:30:39.621 "raid_level": "raid1", 00:30:39.621 "superblock": true, 00:30:39.621 "num_base_bdevs": 2, 00:30:39.621 "num_base_bdevs_discovered": 1, 00:30:39.621 "num_base_bdevs_operational": 1, 00:30:39.621 "base_bdevs_list": [ 00:30:39.621 { 00:30:39.621 "name": null, 00:30:39.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.621 "is_configured": false, 00:30:39.621 "data_offset": 0, 00:30:39.621 "data_size": 7936 00:30:39.621 }, 00:30:39.621 { 00:30:39.621 "name": "BaseBdev2", 00:30:39.621 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:39.621 "is_configured": true, 00:30:39.621 "data_offset": 256, 00:30:39.621 "data_size": 7936 00:30:39.621 } 00:30:39.621 ] 00:30:39.621 }' 00:30:39.621 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.621 07:51:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:39.879 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:39.880 
07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:39.880 "name": "raid_bdev1", 00:30:39.880 "uuid": "c8bf2f45-3aff-41cc-8237-744036c7a926", 00:30:39.880 "strip_size_kb": 0, 00:30:39.880 "state": "online", 00:30:39.880 "raid_level": "raid1", 00:30:39.880 "superblock": true, 00:30:39.880 "num_base_bdevs": 2, 00:30:39.880 "num_base_bdevs_discovered": 1, 00:30:39.880 "num_base_bdevs_operational": 1, 00:30:39.880 "base_bdevs_list": [ 00:30:39.880 { 00:30:39.880 "name": null, 00:30:39.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.880 "is_configured": false, 00:30:39.880 "data_offset": 0, 00:30:39.880 "data_size": 7936 00:30:39.880 }, 00:30:39.880 { 00:30:39.880 "name": "BaseBdev2", 00:30:39.880 "uuid": "25abb096-79ad-54f1-9034-98d1a144374e", 00:30:39.880 "is_configured": true, 00:30:39.880 "data_offset": 256, 00:30:39.880 "data_size": 7936 00:30:39.880 } 00:30:39.880 ] 00:30:39.880 }' 00:30:39.880 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89315 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' -z 89315 ']' 00:30:40.138 07:51:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # kill -0 89315 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # uname 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:40.138 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 89315 00:30:40.138 killing process with pid 89315 00:30:40.138 Received shutdown signal, test time was about 60.000000 seconds 00:30:40.138 00:30:40.138 Latency(us) 00:30:40.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.138 =================================================================================================================== 00:30:40.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:40.139 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:40.139 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:40.139 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # echo 'killing process with pid 89315' 00:30:40.139 07:51:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # kill 89315 00:30:40.139 [2024-10-07 07:51:39.552701] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:40.139 [2024-10-07 07:51:39.552869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:40.139 [2024-10-07 07:51:39.552927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:40.139 [2024-10-07 07:51:39.552946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:40.139 07:51:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@977 -- # wait 89315 00:30:40.433 [2024-10-07 07:51:39.865870] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:41.807 07:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:30:41.807 00:30:41.807 real 0m18.086s 00:30:41.807 user 0m23.805s 00:30:41.807 sys 0m1.787s 00:30:41.807 07:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:41.807 07:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:41.807 ************************************ 00:30:41.807 END TEST raid_rebuild_test_sb_md_interleaved 00:30:41.807 ************************************ 00:30:41.807 07:51:41 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:30:41.807 07:51:41 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:30:41.807 07:51:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89315 ']' 00:30:41.807 07:51:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89315 00:30:41.807 07:51:41 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:30:41.807 ************************************ 00:30:41.807 END TEST bdev_raid 00:30:41.807 ************************************ 00:30:41.807 00:30:41.807 real 12m40.647s 00:30:41.807 user 17m4.973s 00:30:41.807 sys 2m3.047s 00:30:41.807 07:51:41 bdev_raid -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:41.807 07:51:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:41.807 07:51:41 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:30:41.807 07:51:41 -- common/autotest_common.sh@1104 -- # '[' 2 -le 1 ']' 00:30:41.807 07:51:41 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:41.807 07:51:41 -- common/autotest_common.sh@10 -- # set +x 00:30:41.807 ************************************ 00:30:41.807 START TEST spdkcli_raid 00:30:41.807 
************************************ 00:30:41.807 07:51:41 spdkcli_raid -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:30:42.066 * Looking for test storage... 00:30:42.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1626 -- # lcov --version 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.066 07:51:41 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:30:42.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.066 --rc genhtml_branch_coverage=1 00:30:42.066 --rc genhtml_function_coverage=1 00:30:42.066 --rc genhtml_legend=1 00:30:42.066 --rc geninfo_all_blocks=1 00:30:42.066 --rc geninfo_unexecuted_blocks=1 00:30:42.066 00:30:42.066 ' 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:30:42.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.066 --rc genhtml_branch_coverage=1 00:30:42.066 --rc genhtml_function_coverage=1 00:30:42.066 --rc genhtml_legend=1 00:30:42.066 --rc geninfo_all_blocks=1 00:30:42.066 --rc geninfo_unexecuted_blocks=1 00:30:42.066 00:30:42.066 ' 00:30:42.066 
07:51:41 spdkcli_raid -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:30:42.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.066 --rc genhtml_branch_coverage=1 00:30:42.066 --rc genhtml_function_coverage=1 00:30:42.066 --rc genhtml_legend=1 00:30:42.066 --rc geninfo_all_blocks=1 00:30:42.066 --rc geninfo_unexecuted_blocks=1 00:30:42.066 00:30:42.066 ' 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:30:42.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.066 --rc genhtml_branch_coverage=1 00:30:42.066 --rc genhtml_function_coverage=1 00:30:42.066 --rc genhtml_legend=1 00:30:42.066 --rc geninfo_all_blocks=1 00:30:42.066 --rc geninfo_unexecuted_blocks=1 00:30:42.066 00:30:42.066 ' 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:30:42.066 07:51:41 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:42.066 07:51:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90003 00:30:42.066 07:51:41 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:30:42.067 07:51:41 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90003 00:30:42.067 07:51:41 spdkcli_raid -- common/autotest_common.sh@834 -- # '[' -z 90003 ']' 00:30:42.067 07:51:41 spdkcli_raid -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.067 07:51:41 spdkcli_raid -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:42.067 07:51:41 spdkcli_raid -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.067 07:51:41 spdkcli_raid -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:42.067 07:51:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:42.325 [2024-10-07 07:51:41.691476] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:30:42.325 [2024-10-07 07:51:41.691787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90003 ] 00:30:42.325 [2024-10-07 07:51:41.862359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:42.582 [2024-10-07 07:51:42.127653] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.582 [2024-10-07 07:51:42.127659] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.516 07:51:43 spdkcli_raid -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:43.516 07:51:43 spdkcli_raid -- common/autotest_common.sh@867 -- # return 0 00:30:43.516 07:51:43 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:30:43.516 07:51:43 spdkcli_raid -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:43.516 07:51:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:43.774 07:51:43 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:30:43.774 07:51:43 spdkcli_raid -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:43.774 07:51:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:43.774 07:51:43 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:43.774 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:43.774 ' 00:30:45.149 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:30:45.149 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:30:45.407 07:51:44 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:30:45.407 07:51:44 spdkcli_raid -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:45.407 07:51:44 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:30:45.407 07:51:44 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:30:45.407 07:51:44 spdkcli_raid -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:45.407 07:51:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:45.407 07:51:44 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:30:45.407 ' 00:30:46.343 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:30:46.601 07:51:45 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:30:46.601 07:51:45 spdkcli_raid -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:46.601 07:51:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:46.601 07:51:46 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:30:46.601 07:51:46 spdkcli_raid -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:46.601 07:51:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:46.601 07:51:46 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:30:46.601 07:51:46 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:30:47.168 07:51:46 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:30:47.168 07:51:46 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:30:47.168 07:51:46 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:30:47.168 07:51:46 spdkcli_raid -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:47.168 07:51:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:47.168 07:51:46 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:30:47.168 07:51:46 spdkcli_raid -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:47.168 07:51:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:47.168 07:51:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:30:47.168 ' 00:30:48.544 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:30:48.545 07:51:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:30:48.545 07:51:47 spdkcli_raid -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:48.545 07:51:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:48.545 07:51:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:30:48.545 07:51:47 spdkcli_raid -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:48.545 07:51:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:48.545 07:51:47 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:30:48.545 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:30:48.545 ' 00:30:49.915 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:30:49.915 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:30:49.915 07:51:49 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:30:49.915 07:51:49 spdkcli_raid -- common/autotest_common.sh@733 -- # xtrace_disable 00:30:49.915 07:51:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:49.915 07:51:49 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90003 00:30:49.915 07:51:49 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' -z 90003 ']' 00:30:49.915 07:51:49 spdkcli_raid -- common/autotest_common.sh@957 -- # kill -0 90003 00:30:49.915 07:51:49 spdkcli_raid -- 
common/autotest_common.sh@958 -- # uname 00:30:50.173 07:51:49 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:50.173 07:51:49 spdkcli_raid -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 90003 00:30:50.173 killing process with pid 90003 00:30:50.173 07:51:49 spdkcli_raid -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:50.173 07:51:49 spdkcli_raid -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:50.173 07:51:49 spdkcli_raid -- common/autotest_common.sh@971 -- # echo 'killing process with pid 90003' 00:30:50.173 07:51:49 spdkcli_raid -- common/autotest_common.sh@972 -- # kill 90003 00:30:50.173 07:51:49 spdkcli_raid -- common/autotest_common.sh@977 -- # wait 90003 00:30:52.702 07:51:52 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:30:52.702 07:51:52 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90003 ']' 00:30:52.702 07:51:52 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90003 00:30:52.702 07:51:52 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' -z 90003 ']' 00:30:52.702 07:51:52 spdkcli_raid -- common/autotest_common.sh@957 -- # kill -0 90003 00:30:52.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 957: kill: (90003) - No such process 00:30:52.702 Process with pid 90003 is not found 00:30:52.702 07:51:52 spdkcli_raid -- common/autotest_common.sh@980 -- # echo 'Process with pid 90003 is not found' 00:30:52.702 07:51:52 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:30:52.702 07:51:52 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:52.702 07:51:52 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:52.702 07:51:52 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:52.702 ************************************ 00:30:52.702 END TEST spdkcli_raid 
00:30:52.702 ************************************ 00:30:52.702 00:30:52.702 real 0m10.839s 00:30:52.702 user 0m22.038s 00:30:52.702 sys 0m1.189s 00:30:52.702 07:51:52 spdkcli_raid -- common/autotest_common.sh@1129 -- # xtrace_disable 00:30:52.702 07:51:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:30:52.702 07:51:52 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:30:52.702 07:51:52 -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:30:52.702 07:51:52 -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:52.702 07:51:52 -- common/autotest_common.sh@10 -- # set +x 00:30:52.702 ************************************ 00:30:52.702 START TEST blockdev_raid5f 00:30:52.702 ************************************ 00:30:52.702 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:30:52.962 * Looking for test storage... 00:30:52.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:52.962 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1625 -- # [[ y == y ]] 00:30:52.962 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1626 -- # lcov --version 00:30:52.962 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1626 -- # awk '{print $NF}' 00:30:52.962 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1626 -- # lt 1.15 2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.962 07:51:52 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.963 07:51:52 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.963 07:51:52 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1627 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1639 -- # export 'LCOV_OPTS= 00:30:52.963 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.963 --rc genhtml_branch_coverage=1 00:30:52.963 --rc genhtml_function_coverage=1 00:30:52.963 --rc genhtml_legend=1 00:30:52.963 --rc geninfo_all_blocks=1 00:30:52.963 --rc geninfo_unexecuted_blocks=1 00:30:52.963 00:30:52.963 ' 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1639 -- # LCOV_OPTS=' 00:30:52.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.963 --rc genhtml_branch_coverage=1 00:30:52.963 --rc genhtml_function_coverage=1 00:30:52.963 --rc genhtml_legend=1 00:30:52.963 --rc geninfo_all_blocks=1 00:30:52.963 --rc geninfo_unexecuted_blocks=1 00:30:52.963 00:30:52.963 ' 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1640 -- # export 'LCOV=lcov 00:30:52.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.963 --rc genhtml_branch_coverage=1 00:30:52.963 --rc genhtml_function_coverage=1 00:30:52.963 --rc genhtml_legend=1 00:30:52.963 --rc geninfo_all_blocks=1 00:30:52.963 --rc geninfo_unexecuted_blocks=1 00:30:52.963 00:30:52.963 ' 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@1640 -- # LCOV='lcov 00:30:52.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.963 --rc genhtml_branch_coverage=1 00:30:52.963 --rc genhtml_function_coverage=1 00:30:52.963 --rc genhtml_legend=1 00:30:52.963 --rc geninfo_all_blocks=1 00:30:52.963 --rc geninfo_unexecuted_blocks=1 00:30:52.963 00:30:52.963 ' 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90297 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:52.963 07:51:52 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90297 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@834 -- # '[' -z 90297 ']' 00:30:52.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@839 -- # local max_retries=100 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@843 -- # xtrace_disable 00:30:52.963 07:51:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:53.222 [2024-10-07 07:51:52.614869] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:53.222 [2024-10-07 07:51:52.615048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90297 ] 00:30:53.480 [2024-10-07 07:51:52.801287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.480 [2024-10-07 07:51:53.026961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.414 07:51:53 blockdev_raid5f -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:30:54.414 07:51:53 blockdev_raid5f -- common/autotest_common.sh@867 -- # return 0 00:30:54.414 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:30:54.414 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:30:54.414 07:51:53 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:30:54.414 07:51:53 blockdev_raid5f -- 
common/autotest_common.sh@564 -- # xtrace_disable 00:30:54.414 07:51:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:54.672 Malloc0 00:30:54.672 Malloc1 00:30:54.672 Malloc2 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@564 -- # xtrace_disable 00:30:54.672 07:51:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:54.672 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:30:54.931 07:51:54 blockdev_raid5f -- common/autotest_common.sh@592 -- # [[ 0 == 0 ]] 00:30:54.931 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:30:54.931 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "14e71831-e0ef-4f56-b6bb-29de40f1d91e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "14e71831-e0ef-4f56-b6bb-29de40f1d91e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "14e71831-e0ef-4f56-b6bb-29de40f1d91e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d42b24fb-7806-4e7e-aae0-0f17a0c601d4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c9bdca46-975a-416f-88b1-c5ce96a3a749",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "296197f2-421a-4f4f-8360-4757ccb3c96a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:30:54.931 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:30:54.931 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:30:54.931 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:30:54.931 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:30:54.931 07:51:54 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90297 00:30:54.931 07:51:54 blockdev_raid5f -- common/autotest_common.sh@953 -- # '[' -z 90297 ']' 00:30:54.931 07:51:54 blockdev_raid5f -- common/autotest_common.sh@957 -- # kill -0 90297 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@958 -- # uname 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 90297 00:30:54.932 killing process with pid 90297 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@971 -- # echo 'killing process with pid 90297' 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@972 -- # kill 90297 00:30:54.932 07:51:54 blockdev_raid5f -- common/autotest_common.sh@977 -- # wait 90297 00:30:58.217 07:51:57 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:58.217 07:51:57 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:30:58.217 07:51:57 
blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 7 -le 1 ']' 00:30:58.217 07:51:57 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:30:58.217 07:51:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:30:58.217 ************************************ 00:30:58.217 START TEST bdev_hello_world 00:30:58.217 ************************************ 00:30:58.217 07:51:57 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:30:58.217 [2024-10-07 07:51:57.411976] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:30:58.217 [2024-10-07 07:51:57.412123] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90364 ] 00:30:58.217 [2024-10-07 07:51:57.579458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.476 [2024-10-07 07:51:57.806166] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.043 [2024-10-07 07:51:58.405980] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:59.043 [2024-10-07 07:51:58.406040] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:30:59.043 [2024-10-07 07:51:58.406063] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:59.043 [2024-10-07 07:51:58.406610] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:59.043 [2024-10-07 07:51:58.406811] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:59.043 [2024-10-07 07:51:58.406834] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:59.043 [2024-10-07 07:51:58.406893] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:30:59.043 00:30:59.043 [2024-10-07 07:51:58.406917] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:00.944 00:31:00.944 real 0m2.722s 00:31:00.944 user 0m2.321s 00:31:00.944 sys 0m0.274s 00:31:00.944 07:52:00 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:00.944 07:52:00 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:00.944 ************************************ 00:31:00.944 END TEST bdev_hello_world 00:31:00.944 ************************************ 00:31:00.944 07:52:00 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:31:00.944 07:52:00 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:31:00.944 07:52:00 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:00.944 07:52:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:00.944 ************************************ 00:31:00.944 START TEST bdev_bounds 00:31:00.944 ************************************ 00:31:00.944 07:52:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # bdev_bounds '' 00:31:00.944 Process bdevio pid: 90416 00:31:00.944 07:52:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90416 00:31:00.944 07:52:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:00.944 07:52:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:00.944 07:52:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90416' 00:31:00.944 07:52:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90416 00:31:00.944 07:52:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # '[' -z 90416 ']' 00:31:00.945 07:52:00 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.945 07:52:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local max_retries=100 00:31:00.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.945 07:52:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.945 07:52:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@843 -- # xtrace_disable 00:31:00.945 07:52:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:00.945 [2024-10-07 07:52:00.214795] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:31:00.945 [2024-10-07 07:52:00.215762] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90416 ] 00:31:00.945 [2024-10-07 07:52:00.398778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:01.202 [2024-10-07 07:52:00.623396] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.202 [2024-10-07 07:52:00.623508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.202 [2024-10-07 07:52:00.623515] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.767 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:31:01.767 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@867 -- # return 0 00:31:01.767 07:52:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:02.024 I/O targets: 00:31:02.024 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:02.024 00:31:02.024 
00:31:02.024 CUnit - A unit testing framework for C - Version 2.1-3 00:31:02.024 http://cunit.sourceforge.net/ 00:31:02.024 00:31:02.024 00:31:02.024 Suite: bdevio tests on: raid5f 00:31:02.024 Test: blockdev write read block ...passed 00:31:02.024 Test: blockdev write zeroes read block ...passed 00:31:02.024 Test: blockdev write zeroes read no split ...passed 00:31:02.024 Test: blockdev write zeroes read split ...passed 00:31:02.024 Test: blockdev write zeroes read split partial ...passed 00:31:02.024 Test: blockdev reset ...passed 00:31:02.024 Test: blockdev write read 8 blocks ...passed 00:31:02.024 Test: blockdev write read size > 128k ...passed 00:31:02.024 Test: blockdev write read invalid size ...passed 00:31:02.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:02.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:02.025 Test: blockdev write read max offset ...passed 00:31:02.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:02.025 Test: blockdev writev readv 8 blocks ...passed 00:31:02.025 Test: blockdev writev readv 30 x 1block ...passed 00:31:02.025 Test: blockdev writev readv block ...passed 00:31:02.025 Test: blockdev writev readv size > 128k ...passed 00:31:02.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:02.025 Test: blockdev comparev and writev ...passed 00:31:02.025 Test: blockdev nvme passthru rw ...passed 00:31:02.025 Test: blockdev nvme passthru vendor specific ...passed 00:31:02.025 Test: blockdev nvme admin passthru ...passed 00:31:02.025 Test: blockdev copy ...passed 00:31:02.025 00:31:02.025 Run Summary: Type Total Ran Passed Failed Inactive 00:31:02.025 suites 1 1 n/a 0 0 00:31:02.025 tests 23 23 23 0 0 00:31:02.025 asserts 130 130 130 0 n/a 00:31:02.025 00:31:02.025 Elapsed time = 0.553 seconds 00:31:02.025 0 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90416 00:31:02.282 
07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # '[' -z 90416 ']' 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # kill -0 90416 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # uname 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 90416 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # echo 'killing process with pid 90416' 00:31:02.282 killing process with pid 90416 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # kill 90416 00:31:02.282 07:52:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@977 -- # wait 90416 00:31:04.229 07:52:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:31:04.229 00:31:04.229 real 0m3.256s 00:31:04.229 user 0m7.825s 00:31:04.229 sys 0m0.455s 00:31:04.229 07:52:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:04.229 ************************************ 00:31:04.229 END TEST bdev_bounds 00:31:04.229 ************************************ 00:31:04.229 07:52:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:04.229 07:52:03 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:04.229 07:52:03 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 5 -le 1 ']' 00:31:04.229 07:52:03 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:04.229 
07:52:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:04.229 ************************************ 00:31:04.229 START TEST bdev_nbd 00:31:04.229 ************************************ 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90477 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90477 /var/tmp/spdk-nbd.sock 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # '[' -z 90477 ']' 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local max_retries=100 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@841 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@843 -- # xtrace_disable 00:31:04.229 07:52:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:04.229 [2024-10-07 07:52:03.504539] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:31:04.229 [2024-10-07 07:52:03.504926] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.229 [2024-10-07 07:52:03.671881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.488 [2024-10-07 07:52:03.895094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@863 -- # (( i == 0 )) 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # return 0 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:05.054 07:52:04 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local i 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # break 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:05.313 1+0 records in 00:31:05.313 1+0 records out 00:31:05.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272669 s, 15.0 MB/s 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # size=4096 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 
00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # return 0 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:05.313 07:52:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:05.571 { 00:31:05.571 "nbd_device": "/dev/nbd0", 00:31:05.571 "bdev_name": "raid5f" 00:31:05.571 } 00:31:05.571 ]' 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:05.571 { 00:31:05.571 "nbd_device": "/dev/nbd0", 00:31:05.571 "bdev_name": "raid5f" 00:31:05.571 } 00:31:05.571 ]' 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:05.571 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:06.136 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.395 07:52:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:06.653 /dev/nbd0 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:06.653 07:52:06 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local nbd_name=nbd0 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local i 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # (( i = 1 )) 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # (( i <= 20 )) 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # grep -q -w nbd0 /proc/partitions 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # break 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # (( i = 1 )) 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # (( i <= 20 )) 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:06.653 1+0 records in 00:31:06.653 1+0 records out 00:31:06.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273131 s, 15.0 MB/s 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # size=4096 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # '[' 4096 '!=' 0 ']' 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # return 0 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:06.653 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:06.912 { 00:31:06.912 "nbd_device": "/dev/nbd0", 00:31:06.912 "bdev_name": "raid5f" 00:31:06.912 } 00:31:06.912 ]' 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:06.912 { 00:31:06.912 "nbd_device": "/dev/nbd0", 00:31:06.912 "bdev_name": "raid5f" 00:31:06.912 } 00:31:06.912 ]' 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:06.912 256+0 records in 00:31:06.912 256+0 records out 00:31:06.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00710434 s, 148 MB/s 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:06.912 256+0 records in 00:31:06.912 256+0 records out 00:31:06.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0353507 s, 29.7 MB/s 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:06.912 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:07.171 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:31:07.468 07:52:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:07.728 malloc_lvol_verify 00:31:07.728 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:07.987 f710e14f-4fd1-48bd-adc4-f99cdb7d8ecc 00:31:07.987 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:08.250 87a8ccdc-c6e8-44e6-b455-654a8c3c6289 00:31:08.250 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:08.511 /dev/nbd0 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:31:08.511 mke2fs 1.47.0 (5-Feb-2023) 00:31:08.511 Discarding device blocks: 0/4096 done 00:31:08.511 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:08.511 00:31:08.511 Allocating group tables: 0/1 done 00:31:08.511 Writing inode tables: 0/1 done 00:31:08.511 Creating journal (1024 blocks): done 00:31:08.511 Writing superblocks and filesystem accounting information: 0/1 done 00:31:08.511 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:08.511 07:52:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90477 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # '[' -z 90477 ']' 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # kill -0 90477 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # uname 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # '[' Linux = Linux ']' 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # ps --no-headers -o comm= 90477 00:31:08.770 killing process with pid 90477 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # process_name=reactor_0 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@963 -- # '[' reactor_0 = sudo ']' 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # echo 'killing process with pid 90477' 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # kill 90477 00:31:08.770 07:52:08 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@977 -- # wait 90477 00:31:10.680 07:52:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:31:10.680 00:31:10.680 real 0m6.582s 00:31:10.680 user 0m8.892s 00:31:10.680 sys 0m1.520s 00:31:10.680 07:52:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:10.680 07:52:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:10.680 ************************************ 00:31:10.680 END TEST bdev_nbd 00:31:10.680 ************************************ 00:31:10.680 07:52:10 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:31:10.680 07:52:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:31:10.680 07:52:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:31:10.680 07:52:10 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:31:10.680 07:52:10 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 3 -le 1 ']' 00:31:10.680 07:52:10 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:10.680 07:52:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:10.680 ************************************ 00:31:10.680 START TEST bdev_fio 00:31:10.680 ************************************ 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # fio_test_suite '' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:31:10.680 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:31:10.680 07:52:10 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1268 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1269 -- # local workload=verify 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1270 -- # local bdev_type=AIO 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1271 -- # local env_context= 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1272 -- # local fio_dir=/usr/src/fio 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1274 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # '[' -z verify ']' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # '[' -n '' ']' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1289 -- # cat 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # '[' verify == verify ']' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1302 -- # cat 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1311 -- # '[' AIO == AIO ']' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1312 -- # /usr/src/fio/fio --version 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1312 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # echo serialize_overlap=1 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1104 -- # '[' 11 -le 1 ']' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:10.680 ************************************ 00:31:10.680 START TEST bdev_fio_rw_verify 00:31:10.680 ************************************ 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1325 -- # local fio_dir=/usr/src/fio 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1327 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1327 -- # local sanitizers 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1328 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1329 -- # shift 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1331 -- # local asan_lib= 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1332 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # grep libasan 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # awk '{print $3}' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1334 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1335 -- # break 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:10.680 07:52:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:10.939 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:10.939 fio-3.35 00:31:10.939 Starting 1 thread 00:31:23.163 00:31:23.163 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90688: Mon Oct 7 07:52:21 2024 00:31:23.163 read: IOPS=9471, BW=37.0MiB/s (38.8MB/s)(370MiB/10001msec) 00:31:23.163 slat (usec): min=19, max=293, avg=24.88, stdev= 5.21 00:31:23.163 clat (usec): min=11, max=841, avg=167.63, stdev=62.21 00:31:23.163 lat (usec): min=33, max=866, avg=192.51, stdev=63.21 00:31:23.163 clat percentiles (usec): 00:31:23.163 | 50.000th=[ 165], 99.000th=[ 297], 99.900th=[ 420], 99.990th=[ 693], 00:31:23.163 | 99.999th=[ 840] 00:31:23.163 write: IOPS=9918, BW=38.7MiB/s (40.6MB/s)(383MiB/9877msec); 0 zone resets 00:31:23.163 slat (usec): min=9, max=251, avg=21.66, stdev= 5.30 00:31:23.163 clat (usec): min=46, max=941, avg=387.07, stdev=60.71 00:31:23.163 lat (usec): min=66, max=1013, avg=408.73, stdev=62.28 00:31:23.163 clat percentiles (usec): 00:31:23.163 | 50.000th=[ 388], 99.000th=[ 523], 99.900th=[ 750], 99.990th=[ 898], 00:31:23.163 | 99.999th=[ 938] 00:31:23.163 bw ( KiB/s): min=36048, max=44520, per=99.04%, avg=39294.68, stdev=1974.77, samples=19 00:31:23.163 iops : min= 9012, max=11130, avg=9823.58, stdev=493.72, samples=19 00:31:23.163 lat (usec) : 20=0.01%, 50=0.01%, 100=8.18%, 
250=35.77%, 500=54.75% 00:31:23.163 lat (usec) : 750=1.25%, 1000=0.05% 00:31:23.163 cpu : usr=98.16%, sys=0.82%, ctx=29, majf=0, minf=8065 00:31:23.163 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.163 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.163 issued rwts: total=94727,97967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.163 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:23.163 00:31:23.163 Run status group 0 (all jobs): 00:31:23.163 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=370MiB (388MB), run=10001-10001msec 00:31:23.163 WRITE: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=383MiB (401MB), run=9877-9877msec 00:31:24.100 ----------------------------------------------------- 00:31:24.100 Suppressions used: 00:31:24.100 count bytes template 00:31:24.100 1 7 /usr/src/fio/parse.c 00:31:24.100 309 29664 /usr/src/fio/iolog.c 00:31:24.100 1 8 libtcmalloc_minimal.so 00:31:24.100 1 904 libcrypto.so 00:31:24.100 ----------------------------------------------------- 00:31:24.100 00:31:24.100 00:31:24.100 real 0m13.228s 00:31:24.100 user 0m13.590s 00:31:24.100 sys 0m0.949s 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:31:24.100 ************************************ 00:31:24.100 END TEST bdev_fio_rw_verify 00:31:24.100 ************************************ 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1268 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1269 -- # local workload=trim 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1270 -- # local bdev_type= 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1271 -- # local env_context= 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1272 -- # local fio_dir=/usr/src/fio 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1274 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # '[' -z trim ']' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # '[' -n '' ']' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1289 -- # cat 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # '[' trim == verify ']' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # '[' trim == trim ']' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # echo rw=trimwrite 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "14e71831-e0ef-4f56-b6bb-29de40f1d91e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "14e71831-e0ef-4f56-b6bb-29de40f1d91e",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "14e71831-e0ef-4f56-b6bb-29de40f1d91e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d42b24fb-7806-4e7e-aae0-0f17a0c601d4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c9bdca46-975a-416f-88b1-c5ce96a3a749",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "296197f2-421a-4f4f-8360-4757ccb3c96a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:31:24.100 /home/vagrant/spdk_repo/spdk 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:31:24.100 00:31:24.100 real 
0m13.464s 00:31:24.100 user 0m13.685s 00:31:24.100 sys 0m1.068s 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:24.100 07:52:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:24.100 ************************************ 00:31:24.100 END TEST bdev_fio 00:31:24.100 ************************************ 00:31:24.100 07:52:23 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:24.100 07:52:23 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:24.100 07:52:23 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 16 -le 1 ']' 00:31:24.100 07:52:23 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:24.100 07:52:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:24.100 ************************************ 00:31:24.100 START TEST bdev_verify 00:31:24.100 ************************************ 00:31:24.100 07:52:23 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:24.100 [2024-10-07 07:52:23.656035] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:31:24.100 [2024-10-07 07:52:23.656184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90852 ] 00:31:24.360 [2024-10-07 07:52:23.826413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:24.619 [2024-10-07 07:52:24.111913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.619 [2024-10-07 07:52:24.111942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.186 Running I/O for 5 seconds... 00:31:30.347 9827.00 IOPS, 38.39 MiB/s 9881.50 IOPS, 38.60 MiB/s 9716.67 IOPS, 37.96 MiB/s 9715.00 IOPS, 37.95 MiB/s 9738.20 IOPS, 38.04 MiB/s 00:31:30.347 Latency(us) 00:31:30.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.347 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:30.347 Verification LBA range: start 0x0 length 0x2000 00:31:30.347 raid5f : 5.01 4105.55 16.04 0.00 0.00 46855.84 210.65 42941.68 00:31:30.347 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:30.347 Verification LBA range: start 0x2000 length 0x2000 00:31:30.347 raid5f : 5.02 5627.80 21.98 0.00 0.00 34143.92 156.04 27462.70 00:31:30.347 =================================================================================================================== 00:31:30.347 Total : 9733.35 38.02 0.00 0.00 39500.82 156.04 42941.68 00:31:32.247 00:31:32.247 real 0m8.074s 00:31:32.247 user 0m14.537s 00:31:32.247 sys 0m0.322s 00:31:32.247 07:52:31 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:32.247 07:52:31 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:31:32.247 ************************************ 00:31:32.247 END TEST bdev_verify 00:31:32.247 ************************************ 
00:31:32.247 07:52:31 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:32.247 07:52:31 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 16 -le 1 ']' 00:31:32.247 07:52:31 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:32.247 07:52:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:32.247 ************************************ 00:31:32.247 START TEST bdev_verify_big_io 00:31:32.247 ************************************ 00:31:32.247 07:52:31 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:32.505 [2024-10-07 07:52:31.825973] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:31:32.505 [2024-10-07 07:52:31.826153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90956 ] 00:31:32.505 [2024-10-07 07:52:32.010070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:32.768 [2024-10-07 07:52:32.304573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.768 [2024-10-07 07:52:32.304616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.715 Running I/O for 5 seconds... 
00:31:38.723 442.00 IOPS, 27.62 MiB/s 570.50 IOPS, 35.66 MiB/s 592.00 IOPS, 37.00 MiB/s 602.75 IOPS, 37.67 MiB/s 609.20 IOPS, 38.08 MiB/s 00:31:38.723 Latency(us) 00:31:38.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.723 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:38.723 Verification LBA range: start 0x0 length 0x200 00:31:38.723 raid5f : 5.25 265.91 16.62 0.00 0.00 11662815.06 230.16 539267.66 00:31:38.723 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:38.723 Verification LBA range: start 0x200 length 0x200 00:31:38.723 raid5f : 5.24 363.88 22.74 0.00 0.00 8511939.57 335.48 423424.98 00:31:38.723 =================================================================================================================== 00:31:38.723 Total : 629.79 39.36 0.00 0.00 9845002.27 230.16 539267.66 00:31:41.257 00:31:41.257 real 0m8.560s 00:31:41.257 user 0m15.276s 00:31:41.257 sys 0m0.448s 00:31:41.257 07:52:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:41.257 07:52:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:31:41.257 ************************************ 00:31:41.257 END TEST bdev_verify_big_io 00:31:41.257 ************************************ 00:31:41.257 07:52:40 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:41.257 07:52:40 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 13 -le 1 ']' 00:31:41.257 07:52:40 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:41.257 07:52:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:41.257 ************************************ 00:31:41.257 START TEST bdev_write_zeroes 00:31:41.257 ************************************ 
00:31:41.257 07:52:40 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:41.257 [2024-10-07 07:52:40.451741] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:31:41.257 [2024-10-07 07:52:40.451894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91060 ] 00:31:41.257 [2024-10-07 07:52:40.617767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.515 [2024-10-07 07:52:40.873123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.082 Running I/O for 1 seconds... 00:31:43.020 23175.00 IOPS, 90.53 MiB/s 00:31:43.020 Latency(us) 00:31:43.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.020 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:43.020 raid5f : 1.01 23131.42 90.36 0.00 0.00 5512.25 1654.00 8550.89 00:31:43.020 =================================================================================================================== 00:31:43.020 Total : 23131.42 90.36 0.00 0.00 5512.25 1654.00 8550.89 00:31:44.923 00:31:44.923 real 0m3.784s 00:31:44.923 user 0m3.363s 00:31:44.923 sys 0m0.287s 00:31:44.923 07:52:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:44.923 07:52:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:31:44.923 ************************************ 00:31:44.923 END TEST bdev_write_zeroes 00:31:44.923 ************************************ 00:31:44.923 07:52:44 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:44.923 07:52:44 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 13 -le 1 ']' 00:31:44.923 07:52:44 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:44.923 07:52:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:44.923 ************************************ 00:31:44.923 START TEST bdev_json_nonenclosed 00:31:44.923 ************************************ 00:31:44.923 07:52:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:44.923 [2024-10-07 07:52:44.264612] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 00:31:44.923 [2024-10-07 07:52:44.264748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91119 ] 00:31:44.923 [2024-10-07 07:52:44.428417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.181 [2024-10-07 07:52:44.655924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.181 [2024-10-07 07:52:44.656031] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:31:45.181 [2024-10-07 07:52:44.656055] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:45.181 [2024-10-07 07:52:44.656068] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:45.747 00:31:45.747 real 0m0.904s 00:31:45.747 user 0m0.652s 00:31:45.747 sys 0m0.147s 00:31:45.747 07:52:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:45.747 07:52:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:45.747 ************************************ 00:31:45.747 END TEST bdev_json_nonenclosed 00:31:45.747 ************************************ 00:31:45.747 07:52:45 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:45.747 07:52:45 blockdev_raid5f -- common/autotest_common.sh@1104 -- # '[' 13 -le 1 ']' 00:31:45.747 07:52:45 blockdev_raid5f -- common/autotest_common.sh@1110 -- # xtrace_disable 00:31:45.747 07:52:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:45.747 ************************************ 00:31:45.747 START TEST bdev_json_nonarray 00:31:45.747 ************************************ 00:31:45.747 07:52:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:45.747 [2024-10-07 07:52:45.243688] Starting SPDK v25.01-pre git sha1 70750b651 / DPDK 24.03.0 initialization... 
00:31:45.747 [2024-10-07 07:52:45.243826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91150 ] 00:31:46.006 [2024-10-07 07:52:45.405060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.265 [2024-10-07 07:52:45.628811] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.265 [2024-10-07 07:52:45.628947] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:31:46.265 [2024-10-07 07:52:45.628990] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:46.265 [2024-10-07 07:52:45.629004] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:46.524 00:31:46.524 real 0m0.906s 00:31:46.524 user 0m0.659s 00:31:46.524 sys 0m0.142s 00:31:46.524 07:52:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:46.524 07:52:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:46.524 ************************************ 00:31:46.524 END TEST bdev_json_nonarray 00:31:46.524 ************************************ 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:31:46.784 07:52:46 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:31:46.784 00:31:46.784 real 0m53.894s 00:31:46.784 user 1m12.415s 00:31:46.784 sys 0m5.759s 00:31:46.784 07:52:46 blockdev_raid5f -- common/autotest_common.sh@1129 -- # xtrace_disable 00:31:46.784 07:52:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:31:46.784 ************************************ 00:31:46.784 END TEST blockdev_raid5f 00:31:46.784 ************************************ 00:31:46.784 07:52:46 -- spdk/autotest.sh@194 -- # uname -s 00:31:46.784 07:52:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:31:46.784 07:52:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:31:46.784 07:52:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:31:46.784 07:52:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@256 -- # timing_exit lib 00:31:46.784 07:52:46 -- common/autotest_common.sh@733 -- # xtrace_disable 00:31:46.784 07:52:46 -- common/autotest_common.sh@10 -- # set +x 00:31:46.784 07:52:46 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:46.784 07:52:46 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:31:46.784 07:52:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:46.784 07:52:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:46.784 07:52:46 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:31:46.784 07:52:46 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:31:46.784 07:52:46 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:31:46.784 07:52:46 -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:46.784 07:52:46 -- common/autotest_common.sh@10 -- # set +x 00:31:46.784 07:52:46 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:31:46.784 07:52:46 -- common/autotest_common.sh@1380 -- # local autotest_es=0 00:31:46.784 07:52:46 -- common/autotest_common.sh@1381 -- # xtrace_disable 00:31:46.784 07:52:46 -- common/autotest_common.sh@10 -- # set +x 00:31:48.692 INFO: APP EXITING 00:31:48.692 INFO: killing all VMs 00:31:48.692 INFO: killing vhost app 00:31:48.692 INFO: EXIT DONE 00:31:49.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:49.261 Waiting for block devices as requested 00:31:49.261 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:49.261 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:50.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:50.198 Cleaning 00:31:50.198 Removing: /var/run/dpdk/spdk0/config 00:31:50.198 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:50.198 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:50.198 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:50.198 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:50.198 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:50.198 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:50.198 Removing: /dev/shm/spdk_tgt_trace.pid56515 00:31:50.198 Removing: /var/run/dpdk/spdk0 00:31:50.198 Removing: /var/run/dpdk/spdk_pid56257 00:31:50.198 Removing: /var/run/dpdk/spdk_pid56515 00:31:50.198 Removing: /var/run/dpdk/spdk_pid56766 00:31:50.198 Removing: /var/run/dpdk/spdk_pid56876 00:31:50.198 Removing: /var/run/dpdk/spdk_pid56943 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57082 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57100 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57339 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57463 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57593 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57732 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57857 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57902 00:31:50.198 Removing: /var/run/dpdk/spdk_pid57944 00:31:50.198 Removing: /var/run/dpdk/spdk_pid58026 00:31:50.198 Removing: /var/run/dpdk/spdk_pid58159 00:31:50.198 Removing: /var/run/dpdk/spdk_pid58642 00:31:50.198 Removing: /var/run/dpdk/spdk_pid58729 00:31:50.198 Removing: /var/run/dpdk/spdk_pid58818 00:31:50.198 Removing: /var/run/dpdk/spdk_pid58840 00:31:50.198 Removing: /var/run/dpdk/spdk_pid59021 00:31:50.198 Removing: /var/run/dpdk/spdk_pid59037 00:31:50.198 Removing: /var/run/dpdk/spdk_pid59213 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59234 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59309 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59333 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59402 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59426 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59649 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59691 00:31:50.457 Removing: /var/run/dpdk/spdk_pid59786 00:31:50.457 Removing: /var/run/dpdk/spdk_pid61216 00:31:50.457 Removing: 
/var/run/dpdk/spdk_pid61422 00:31:50.457 Removing: /var/run/dpdk/spdk_pid61572 00:31:50.457 Removing: /var/run/dpdk/spdk_pid62232 00:31:50.457 Removing: /var/run/dpdk/spdk_pid62439 00:31:50.457 Removing: /var/run/dpdk/spdk_pid62589 00:31:50.457 Removing: /var/run/dpdk/spdk_pid63239 00:31:50.457 Removing: /var/run/dpdk/spdk_pid63575 00:31:50.457 Removing: /var/run/dpdk/spdk_pid63720 00:31:50.457 Removing: /var/run/dpdk/spdk_pid65128 00:31:50.457 Removing: /var/run/dpdk/spdk_pid65386 00:31:50.457 Removing: /var/run/dpdk/spdk_pid65532 00:31:50.457 Removing: /var/run/dpdk/spdk_pid66941 00:31:50.457 Removing: /var/run/dpdk/spdk_pid67201 00:31:50.457 Removing: /var/run/dpdk/spdk_pid67352 00:31:50.457 Removing: /var/run/dpdk/spdk_pid68754 00:31:50.457 Removing: /var/run/dpdk/spdk_pid69205 00:31:50.457 Removing: /var/run/dpdk/spdk_pid69363 00:31:50.457 Removing: /var/run/dpdk/spdk_pid70870 00:31:50.457 Removing: /var/run/dpdk/spdk_pid71135 00:31:50.457 Removing: /var/run/dpdk/spdk_pid71286 00:31:50.457 Removing: /var/run/dpdk/spdk_pid72790 00:31:50.457 Removing: /var/run/dpdk/spdk_pid73055 00:31:50.457 Removing: /var/run/dpdk/spdk_pid73206 00:31:50.457 Removing: /var/run/dpdk/spdk_pid74705 00:31:50.457 Removing: /var/run/dpdk/spdk_pid75199 00:31:50.457 Removing: /var/run/dpdk/spdk_pid75353 00:31:50.457 Removing: /var/run/dpdk/spdk_pid75497 00:31:50.457 Removing: /var/run/dpdk/spdk_pid75937 00:31:50.457 Removing: /var/run/dpdk/spdk_pid76678 00:31:50.457 Removing: /var/run/dpdk/spdk_pid77066 00:31:50.457 Removing: /var/run/dpdk/spdk_pid77760 00:31:50.457 Removing: /var/run/dpdk/spdk_pid78212 00:31:50.457 Removing: /var/run/dpdk/spdk_pid78979 00:31:50.457 Removing: /var/run/dpdk/spdk_pid79394 00:31:50.457 Removing: /var/run/dpdk/spdk_pid81372 00:31:50.457 Removing: /var/run/dpdk/spdk_pid81822 00:31:50.457 Removing: /var/run/dpdk/spdk_pid82269 00:31:50.457 Removing: /var/run/dpdk/spdk_pid84365 00:31:50.457 Removing: /var/run/dpdk/spdk_pid84850 00:31:50.457 Removing: 
/var/run/dpdk/spdk_pid85378 00:31:50.457 Removing: /var/run/dpdk/spdk_pid86445 00:31:50.457 Removing: /var/run/dpdk/spdk_pid86772 00:31:50.457 Removing: /var/run/dpdk/spdk_pid87716 00:31:50.457 Removing: /var/run/dpdk/spdk_pid88046 00:31:50.457 Removing: /var/run/dpdk/spdk_pid88992 00:31:50.457 Removing: /var/run/dpdk/spdk_pid89315 00:31:50.457 Removing: /var/run/dpdk/spdk_pid90003 00:31:50.457 Removing: /var/run/dpdk/spdk_pid90297 00:31:50.457 Removing: /var/run/dpdk/spdk_pid90364 00:31:50.457 Removing: /var/run/dpdk/spdk_pid90416 00:31:50.457 Removing: /var/run/dpdk/spdk_pid90673 00:31:50.457 Removing: /var/run/dpdk/spdk_pid90852 00:31:50.457 Removing: /var/run/dpdk/spdk_pid90956 00:31:50.457 Removing: /var/run/dpdk/spdk_pid91060 00:31:50.457 Removing: /var/run/dpdk/spdk_pid91119 00:31:50.457 Removing: /var/run/dpdk/spdk_pid91150 00:31:50.457 Clean 00:31:50.728 07:52:50 -- common/autotest_common.sh@1439 -- # return 0 00:31:50.728 07:52:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:31:50.728 07:52:50 -- common/autotest_common.sh@733 -- # xtrace_disable 00:31:50.729 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:31:50.729 07:52:50 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:31:50.729 07:52:50 -- common/autotest_common.sh@733 -- # xtrace_disable 00:31:50.729 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:31:50.729 07:52:50 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:50.729 07:52:50 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:50.729 07:52:50 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:50.729 07:52:50 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:31:50.729 07:52:50 -- spdk/autotest.sh@394 -- # hostname 00:31:50.729 07:52:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:51.001 geninfo: WARNING: invalid characters removed from testname! 00:32:17.546 07:53:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:17.546 07:53:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:19.453 07:53:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:21.989 07:53:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:24.519 07:53:23 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:26.418 07:53:25 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:28.947 07:53:27 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:28.947 07:53:28 -- common/autotest_common.sh@1625 -- $ [[ y == y ]] 00:32:28.947 07:53:28 -- common/autotest_common.sh@1626 -- $ awk '{print $NF}' 00:32:28.947 07:53:28 -- common/autotest_common.sh@1626 -- $ lcov --version 00:32:28.947 07:53:28 -- common/autotest_common.sh@1626 -- $ lt 1.15 2 00:32:28.947 07:53:28 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:32:28.947 07:53:28 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:32:28.947 07:53:28 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:32:28.947 07:53:28 -- scripts/common.sh@336 -- $ IFS=.-: 00:32:28.947 07:53:28 -- scripts/common.sh@336 -- $ read -ra ver1 00:32:28.947 07:53:28 -- scripts/common.sh@337 -- $ IFS=.-: 00:32:28.947 07:53:28 -- scripts/common.sh@337 -- $ read -ra ver2 00:32:28.947 07:53:28 -- scripts/common.sh@338 -- $ local 'op=<' 00:32:28.947 07:53:28 -- scripts/common.sh@340 -- $ ver1_l=2 00:32:28.947 07:53:28 -- scripts/common.sh@341 -- $ ver2_l=1 00:32:28.947 07:53:28 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:32:28.947 07:53:28 -- scripts/common.sh@344 -- $ case "$op" in 00:32:28.947 07:53:28 -- scripts/common.sh@345 -- $ : 1 
00:32:28.947 07:53:28 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:32:28.947 07:53:28 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:28.947 07:53:28 -- scripts/common.sh@365 -- $ decimal 1 00:32:28.947 07:53:28 -- scripts/common.sh@353 -- $ local d=1 00:32:28.947 07:53:28 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:32:28.947 07:53:28 -- scripts/common.sh@355 -- $ echo 1 00:32:28.947 07:53:28 -- scripts/common.sh@365 -- $ ver1[v]=1 00:32:28.947 07:53:28 -- scripts/common.sh@366 -- $ decimal 2 00:32:28.947 07:53:28 -- scripts/common.sh@353 -- $ local d=2 00:32:28.947 07:53:28 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:32:28.947 07:53:28 -- scripts/common.sh@355 -- $ echo 2 00:32:28.947 07:53:28 -- scripts/common.sh@366 -- $ ver2[v]=2 00:32:28.947 07:53:28 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:32:28.947 07:53:28 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:32:28.947 07:53:28 -- scripts/common.sh@368 -- $ return 0 00:32:28.947 07:53:28 -- common/autotest_common.sh@1627 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.947 07:53:28 -- common/autotest_common.sh@1639 -- $ export 'LCOV_OPTS= 00:32:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.947 --rc genhtml_branch_coverage=1 00:32:28.947 --rc genhtml_function_coverage=1 00:32:28.947 --rc genhtml_legend=1 00:32:28.947 --rc geninfo_all_blocks=1 00:32:28.947 --rc geninfo_unexecuted_blocks=1 00:32:28.947 00:32:28.947 ' 00:32:28.947 07:53:28 -- common/autotest_common.sh@1639 -- $ LCOV_OPTS=' 00:32:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.947 --rc genhtml_branch_coverage=1 00:32:28.947 --rc genhtml_function_coverage=1 00:32:28.947 --rc genhtml_legend=1 00:32:28.947 --rc geninfo_all_blocks=1 00:32:28.947 --rc geninfo_unexecuted_blocks=1 00:32:28.947 00:32:28.947 ' 00:32:28.947 07:53:28 -- common/autotest_common.sh@1640 -- $ export 'LCOV=lcov 
00:32:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.947 --rc genhtml_branch_coverage=1 00:32:28.947 --rc genhtml_function_coverage=1 00:32:28.947 --rc genhtml_legend=1 00:32:28.947 --rc geninfo_all_blocks=1 00:32:28.947 --rc geninfo_unexecuted_blocks=1 00:32:28.947 00:32:28.947 ' 00:32:28.947 07:53:28 -- common/autotest_common.sh@1640 -- $ LCOV='lcov 00:32:28.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.947 --rc genhtml_branch_coverage=1 00:32:28.947 --rc genhtml_function_coverage=1 00:32:28.947 --rc genhtml_legend=1 00:32:28.947 --rc geninfo_all_blocks=1 00:32:28.947 --rc geninfo_unexecuted_blocks=1 00:32:28.947 00:32:28.947 ' 00:32:28.947 07:53:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:28.947 07:53:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:32:28.947 07:53:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:28.947 07:53:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.947 07:53:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.947 07:53:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.947 07:53:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.947 07:53:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.947 07:53:28 -- paths/export.sh@5 -- $ export PATH 00:32:28.947 07:53:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.947 07:53:28 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:28.947 07:53:28 -- common/autobuild_common.sh@486 -- $ date +%s 00:32:28.947 07:53:28 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728287608.XXXXXX 00:32:28.948 07:53:28 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728287608.6RuDIK 00:32:28.948 07:53:28 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:32:28.948 07:53:28 -- common/autobuild_common.sh@492 -- $ 
'[' -n '' ']' 00:32:28.948 07:53:28 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:32:28.948 07:53:28 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:28.948 07:53:28 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:28.948 07:53:28 -- common/autobuild_common.sh@502 -- $ get_config_params 00:32:28.948 07:53:28 -- common/autotest_common.sh@410 -- $ xtrace_disable 00:32:28.948 07:53:28 -- common/autotest_common.sh@10 -- $ set +x 00:32:28.948 07:53:28 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:32:28.948 07:53:28 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:32:28.948 07:53:28 -- pm/common@17 -- $ local monitor 00:32:28.948 07:53:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:28.948 07:53:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:28.948 07:53:28 -- pm/common@25 -- $ sleep 1 00:32:28.948 07:53:28 -- pm/common@21 -- $ date +%s 00:32:28.948 07:53:28 -- pm/common@21 -- $ date +%s 00:32:28.948 07:53:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728287608 00:32:28.948 07:53:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728287608 00:32:28.948 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728287608_collect-vmstat.pm.log 00:32:28.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728287608_collect-cpu-load.pm.log 00:32:29.883 07:53:29 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:32:29.883 07:53:29 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:32:29.883 07:53:29 -- spdk/autopackage.sh@14 -- $ timing_finish 00:32:29.883 07:53:29 -- common/autotest_common.sh@739 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:29.883 07:53:29 -- common/autotest_common.sh@740 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:29.883 07:53:29 -- common/autotest_common.sh@743 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:29.883 07:53:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:29.883 07:53:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:29.883 07:53:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:29.883 07:53:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:29.883 07:53:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:29.883 07:53:29 -- pm/common@44 -- $ pid=92675 00:32:29.883 07:53:29 -- pm/common@50 -- $ kill -TERM 92675 00:32:29.883 07:53:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:29.883 07:53:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:29.883 07:53:29 -- pm/common@44 -- $ pid=92676 00:32:29.883 07:53:29 -- pm/common@50 -- $ kill -TERM 92676 00:32:29.883 + [[ -n 5260 ]] 00:32:29.883 + sudo kill 5260 00:32:29.889 Pausing (Preparing for shutdown) 01:03:05.774 Resuming build at Mon Oct 07 08:24:05 UTC 2024 after Jenkins restart 01:03:18.189 Waiting for reconnection of VM-host-SM4 before proceeding with 
build 01:03:18.339 Timeout set to expire in 38 min 01:03:18.341 Ready to run at Mon Oct 07 08:24:17 UTC 2024 01:03:18.355 [Pipeline] } 01:03:18.369 [Pipeline] // timeout 01:03:18.377 [Pipeline] } 01:03:18.390 [Pipeline] // stage 01:03:18.396 [Pipeline] } 01:03:18.408 [Pipeline] // catchError 01:03:18.424 [Pipeline] stage 01:03:18.426 [Pipeline] { (Stop VM) 01:03:18.449 [Pipeline] sh 01:03:18.742 + vagrant halt 01:03:22.068 ==> default: Halting domain... 01:03:28.689 [Pipeline] sh 01:03:28.977 + vagrant destroy -f 01:03:32.270 ==> default: Removing domain... 01:03:32.284 [Pipeline] sh 01:03:32.571 + mv output /var/jenkins/workspace/raid-vg-autotest/output 01:03:32.580 [Pipeline] } 01:03:32.593 [Pipeline] // stage 01:03:32.598 [Pipeline] } 01:03:32.611 [Pipeline] // dir 01:03:32.615 [Pipeline] } 01:03:32.629 [Pipeline] // wrap 01:03:32.633 [Pipeline] } 01:03:32.645 [Pipeline] // catchError 01:03:32.653 [Pipeline] stage 01:03:32.655 [Pipeline] { (Epilogue) 01:03:32.667 [Pipeline] sh 01:03:32.952 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:03:38.231 [Pipeline] catchError 01:03:38.233 [Pipeline] { 01:03:38.243 [Pipeline] sh 01:03:38.523 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:03:38.782 Artifacts sizes are good 01:03:38.790 [Pipeline] } 01:03:38.802 [Pipeline] // catchError 01:03:38.810 [Pipeline] archiveArtifacts 01:03:38.817 Archiving artifacts 01:03:38.967 [Pipeline] cleanWs 01:03:39.002 [WS-CLEANUP] Deleting project workspace... 01:03:39.002 [WS-CLEANUP] Deferred wipeout is used... 01:03:39.007 [WS-CLEANUP] done 01:03:39.009 [Pipeline] } 01:03:39.023 [Pipeline] // stage 01:03:39.027 [Pipeline] } 01:03:39.042 [Pipeline] // node 01:03:39.046 [Pipeline] End of Pipeline 01:03:39.063 Finished: SUCCESS